
Guy Uses Machine Learning To Help Him Count How Many Times Noku From Wadiwa Wepa Moyo Says “Hesi”

Stumbling upon this pointless but intriguing video made my day yesterday. When we talk about machine learning and programming, it's rarely this light-hearted.

A student from HIT, Tatenda Mushaya, used machine learning techniques to attempt to figure out how many times Noku from Wadiwa Wepa Moyo says “Hesi”.

If you’ve watched the show, you’ll know why the “Hesi” has become equally iconic and infamous, resulting in some of the funniest social media reactions:

https://twitter.com/muvahaze/status/1252709939606364166

Social media reactions aside, Tatenda followed the steps below:

  1. Collect images from the internet (which can be done with code)
  2. Resize the images
  3. Detect faces using a script
  4. Crop the detected faces
  5. Pick only Noku’s face
  6. Make encodings of Noku’s face
  7. Detect Noku’s face
  8. Find the `Hie` subtitle
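The last three steps boil down to combining two per-frame signals, face detection and subtitle reading, into a single count. Here's a minimal sketch of that counting logic; the boolean lists are made-up stand-ins for the real output of a face recogniser (e.g. the `face_recognition` library) and a subtitle OCR pass, and the function name is our own:

```python
# Hypothetical sketch of steps 6-8: given per-frame results from a face
# detector ("is Noku on screen?") and a subtitle reader ("does the subtitle
# say 'Hie'?"), count distinct "Hesi" events. Consecutive frames where both
# signals are true are merged into one event, since a single greeting spans
# many video frames.

def count_hesi_events(noku_on_screen, subtitle_is_hie):
    """Count runs of consecutive frames where both signals are true."""
    events = 0
    in_event = False
    for face, subtitle in zip(noku_on_screen, subtitle_is_hie):
        hit = face and subtitle
        if hit and not in_event:
            events += 1  # a new "Hesi" starts on this frame
        in_event = hit
    return events

# Example: two separate greetings spread across ten frames.
faces =     [True, True, False, False, True, True, True, False, False, False]
subtitles = [True, True, False, False, False, True, True, False, False, False]
print(count_hesi_events(faces, subtitles))  # 2
```

Merging consecutive frames is the important bit: without it, one on-screen “Hesi” lasting 25 frames would be counted 25 times.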

You can see the script he created to do this on GitHub.

In the video above, Tatenda explained that the process was complicated because the script was trying to read text on varying backgrounds, and as a result it could only pick up four “Hesi”s.
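A common fix for text sitting on varying backgrounds is to binarise the frame before handing it to a text reader such as Tesseract: subtitle text is near-white, so thresholding pushes it to one value and flattens the background to another. A toy illustration of the idea, with made-up greyscale intensities (0–255) standing in for a row of video-frame pixels:

```python
# Hypothetical illustration of pre-processing for subtitle OCR: binarise
# the image so the bright subtitle pixels become 1 and the busy, varying
# background becomes 0. Real code would do this per-frame with an image
# library before passing the result to an OCR engine.

def binarise(pixels, threshold=180):
    """Map bright (subtitle-white) pixels to 1 and everything else to 0."""
    return [1 if p >= threshold else 0 for p in pixels]

# A row of pixels: dark scene, white subtitle strokes, mid-grey background.
row = [30, 42, 250, 255, 248, 120, 90, 252, 33]
print(binarise(row))  # [0, 0, 1, 1, 1, 0, 0, 1, 0]
```

Cropping the frame to the subtitle region first (subtitles sit in a fixed band at the bottom) would cut down the noise even further.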

Fellow programmers, comment below on how Tatenda can make his script better at identifying Noku’s “Hesi”s.






9 thoughts on “Guy Uses Machine Learning To Help Him Count How Many Times Noku From Wadiwa Wepa Moyo Says “Hesi””

  1. I would frame this as an audio extraction and detection problem. “Hesi” could possibly be said whilst the subject isn’t within frame or not facing the camera. I’ve only watched one episode, so I can’t speak to the distribution of such events.

    Audio can be trickier to work with though, as most AI tutorials, examples, etc. focus on computer vision. One option would be to use text recognition to identify locations of the subtitle “Hie” (regardless of the speaker), then extract the audio within the neighbouring areas and process that accordingly to identify the speaker.

  2. Dude, this is impressive. This is not stupid at all!
    I can see something like this taking off and being useful some day. I hope I will remember to hit you up when I find an alternative and more effective way of doing it.

  3. Even if it’s “stupid”, it’s stupidly funny. I quite enjoyed it. Things like these lead to interesting use cases. Remember, mould gave us penicillin. It was a botched experiment. Kinda stupid I’d say 😁
