
Sam Altman Is the CEO of the Year, a Scandalous Google Gemini Release, and EU Regulation of AI: Top AI News of the Week (Plus, Unexpectedly, a Nun)

Our latest AI Digest covers the biggest breaking AI news of the week. Anywhere Club community leader Viktar Shalenchanka comments on the key stories.

Anywhere Club community leader, Viktar Shalenchanka


#1 — Sam Altman is the CEO of the Year

Taylor Swift is the Person of the Year according to Time magazine. Why is this interesting in the context of AI news? One of the other nominees for the magazine's award was Sam Altman, CEO (and former CEO, and CEO again) of OpenAI. That Altman was nominated but did not win tells me that most people still do not grasp how important AI will be to the future of humanity. The mere fact of his nomination, however, shows that the AI boom can no longer be ignored. This is confirmed by Altman's victory in a less prominent category: CEO of the Year.

#2 — Google Gemini release

Another significant event of the year was the long-awaited release of Google Gemini, a generative model that competes with GPT-4. The Gemini launch was accompanied by a video demonstration in which the model, in real time, describes human actions, plays "Rock, Paper, Scissors," and predicts gestures. True to Google tradition, however, the demo was not without a scandal. It turns out that the beautiful video shown during the release was not exactly fake, but it was somewhat enhanced. The company admitted that the video was edited for marketing purposes and that, in reality, Gemini behaves much less intelligently than it does in the demo. None of this changes the fact that OpenAI now has another strong competitor, which is very cool.

#3 — AI Law in the European Union

In the European Union, competition between AI models is about to become more complicated. The EU adopted the world's first comprehensive set of rules regulating artificial intelligence, with significant financial penalties envisaged for violations once the law takes effect. The law follows a "risk-based approach" and mandates risk assessment for all models that fall into the "high risk" category, including GPT-3.5. Interestingly, less stringent rules apply to general-purpose open-source models. And, of course, there are many restrictions on working with personal data. On one hand, regulation of AI has been anticipated since the public release of ChatGPT, which essentially still operates in a gray area. On the other hand, the EU is, as always, trying to build a useful legal framework even though truly strong models do not yet exist.

A Surprise Recommendation!

Since this section is informative but also grants the authors some discretion, I am taking advantage of the opportunity to offer you (and the editor) a small, AI-related surprise. I want to recommend a limited drama series about what would happen if humanity started living by ChatGPT's instructions tomorrow. The fantastic and slightly surreal series, called "Mrs. Davis," tells the story of a nun who fights a chatbot that manages the lives of the planet's population. Enjoy watching!

Watch more news on the Anywhere Club YouTube channel.

Prompt Engineering Foundations
Master the art of crafting, fine-tuning, and formatting effective prompts for LLMs to improve accuracy and boost productivity.
View course
