The Weekly #6

This week has an animal theme as we cover AI at Duolingo, the app with the owl mascot, and an update to Meta's Llama.

Duolingo’s Lily Becomes Chatty

Personally, this is a really exciting story. We talk a lot about how AI is becoming more and more prevalent in our day-to-day lives, and this is a great example (and I do mean daily). At the time of writing, I’ve logged in to the Duolingo app to learn French for 252 days straight. So it was with some interest that I read this article on Forbes about how Duolingo will let users chat with one of its main characters, Lily, with the help of OpenAI’s GPT-4o model.

This is a really good use case for GenAI. Anyone who has used Duolingo knows that most of the learning involves translating from a set of text options, with the odd listening and speaking lesson thrown in. This is all well and good, but it’s no substitute for conversing back and forth with a native speaker.

I’ve yet to try this new feature, but it’s great to see AI being used positively for learning.

Meta Connect 2024 And Their Latest Llama

Last week, Meta held its 11th annual conference, Connect. Whilst there were lots of announcements about more VR headsets, I was more interested in the update to their large language model, Llama 3.2. OpenAI might get more column inches in the regular press for their ChatGPT models, but Meta’s own models are a very significant part of the current AI boom. According to Meta, Llama 3.1 has been downloaded 350 million times, already 10x more than at this time last year. On the back of those numbers, you can expect Llama 3.2 to accelerate that growth even further.

A huge difference between Llama and ChatGPT is that Llama is open-source and can be downloaded and run on your own hardware. This is vital for regulated industries where uploading sensitive data to external servers just isn’t possible. It also allows you to fine-tune the model to fit your use case better, improving accuracy and performance. Llama 3.2 is multimodal, able to process images as well as text, allowing it to understand its environment. This dovetails nicely with Meta’s other products, such as the Quest headset, where they are pushing the development of virtual and mixed reality. An AI that can understand the full context of the visuals and text you are looking at becomes far more useful in how it responds. The scope of potential use cases is vast, and I imagine there will be some really exciting developments in this area.

That said, if you’re reading this from the UK or an EU country, don’t get too excited: for now, Meta AI is not available here due to uncertainty around regulations. This opens up a whole other debate about whether regulation stifles innovation, but that’s not for now.