By Alexander de Ranitz
Hi everyone! Starting this month, Datakami has a new team member: me! I’m Alexander de Ranitz and I will be working part-time at Datakami as a desk researcher. My job description includes developing internal tools and prototypes, helping the rest of the team by researching any questions they might have, and writing this newsletter! Alongside my work at Datakami, I am pursuing a Master’s degree in Artificial Intelligence at Radboud University in Nijmegen, the Netherlands. Besides generative AI, I am mainly interested in combining insights from neuroscience and AI to create more powerful and efficient artificial systems. I also enjoy playing the bass guitar and reading sci-fi. Nice to meet you!
—Alexander
(Pictured: Family of language models in watercolour style, generated with SDXL)
Introducing The Claude 3 Family
Anthropic recently released the newest series of their AI models: Claude 3 Haiku, Sonnet, and Opus, going from small, fast, and cheap to large, powerful, and expensive. Anthropic claims the new models are faster, more intelligent, less likely to refuse to answer, and able to handle a wide range of multimodal inputs. From what I have read online, users seem to be particularly impressed by Claude 3’s image understanding and its writing skills, saying it comes across as more natural and humanlike than other LLMs. This blog post by Zvi Mowshowitz provides a nice overview of benchmark results and user experiences with the new Claude 3 models.
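If you want to try the new models yourself, they are available through Anthropic’s API. Below is a minimal sketch of a call to Claude 3 Haiku using the Anthropic Python SDK; it assumes you have the anthropic package installed and an API key in your environment, and the exact model names and parameters may change over time.

```python
# Minimal sketch: asking Claude 3 Haiku a question via Anthropic's Python SDK.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-haiku-20240307",  # also: claude-3-sonnet-20240229, claude-3-opus-20240229
    max_tokens=256,
    messages=[{"role": "user", "content": "In one sentence, what sets the Claude 3 family apart?"}],
)

print(response.content[0].text)
```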
Open Release of Grok-1
xAI, an AI company led by Elon Musk, just released the model weights and architecture of Grok-1, the base model used for Grok, the AI assistant available in X. This model is a base model, so it is not fine-tuned for any specific task, unlike the Grok you see in X. Since xAI has released the weights, you can now run this model locally. Well, at least if you own a small data centre. Grok-1 is a Mixture-of-Experts model with a whopping 314B parameters, meaning you need a serious amount of compute to run it, or even just load it into memory. So while this release might not offer practical applications for most people, it might motivate other companies to open-source their models as well, which is what Elon Musk hopes for.
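To give a sense of what “a serious amount of compute” means here, a rough back-of-the-envelope calculation of the memory needed just to hold 314B parameters (ignoring activations, caches, and other runtime overhead):

```python
# Rough estimate of memory needed just to store Grok-1's 314B parameters,
# at a few common precisions. Activations and runtime overhead come on top of this.
N_PARAMS = 314e9

for precision, bytes_per_param in [("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    gigabytes = N_PARAMS * bytes_per_param / 1e9
    print(f"{precision}: ~{gigabytes:.0f} GB of weights")
```

Even heavily quantised, that is far beyond a single consumer GPU, hence the “small data centre” remark.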
Approaching Human-Level Forecasting With Language Models
In a recent paper titled Approaching Human-Level Forecasting with Language Models, researchers tried to get LLMs to give accurate probability estimates for questions such as “Will Trump issue another NFT Collection before the 2024 Presidential Election?”. Out of the box, LLMs are not very good at this, often performing at chance level. By adding an information retrieval system and a reasoning pipeline to find and incorporate useful information, the LLMs were able to match the performance of competitive forecasters. Considering that these competitive forecasters are usually experts in their field and that researching these questions takes a lot of time, the AI's performance is rather impressive. Using AI for forecasting is not just of academic interest: the start-up futuresearch already offers an AI forecaster for geopolitical questions.
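The paper’s approach is more involved than this, but the core loop is roughly “retrieve, reason, then commit to a number”. A simplified sketch, where search_news, summarise, and ask_llm are hypothetical helpers standing in for a news retrieval API and an LLM call, not anything from the paper itself:

```python
# Simplified sketch of a retrieve-then-reason forecasting loop.
# `search_news`, `summarise`, and `ask_llm` are hypothetical placeholders,
# not functions from the paper or any specific library.

def forecast(question: str) -> float:
    # 1. Retrieve recent news articles relevant to the forecasting question.
    articles = search_news(question)

    # 2. Condense them into short summaries the model can reason over.
    context = "\n".join(summarise(article) for article in articles)

    # 3. Ask the LLM to weigh the evidence and commit to a probability.
    prompt = (
        f"Question: {question}\n"
        f"Relevant news:\n{context}\n"
        "Reason about the evidence, then end your answer with a probability between 0 and 1."
    )
    answer = ask_llm(prompt)

    # 4. Parse the final number out of the model's answer.
    return float(answer.strip().split()[-1])
```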
The European Parliament Has Approved The AI Act
After a long process of negotiating and revising, the EU’s AI Act has finally passed its last hurdle. However, after a good amount of lobbying, the AI Act has lost some of its sharper edges. For example, some applications that would have been classified as high-risk in previous versions will no longer face the same rules and restrictions if the AI is only intended to “perform a narrow procedural task” or is otherwise unlikely to cause harm. Regardless, the AI Act is still the most extensive and ambitious piece of AI legislation in the world. It is expected that the AI Act will not only influence AI development here in the EU, but that it will also have a global impact: the so-called Brussels effect.
Co-Intelligence By Ethan Mollick
Ethan Mollick, a professor of AI and innovation whom you might know from his great newsletter oneusefulthing, has recently written a book, titled Co-Intelligence, on living and working with AI. Mollick explains how you can use AI to enhance your productivity and enjoyment of work by treating AI tools as a co-worker, teacher, and more. Fittingly, he has also written a blog post about how he used LLMs to help write the book itself, which made the process more pleasant and efficient.
Dune: Part Two
The sequel to the 2021 Dune movie has recently hit the theatres. Unlike in a lot of other science fiction stories, AI surprisingly doesn’t really exist in the world of Dune. However, AI actually plays an important role in Dune’s backstory. If you’re interested in learning more about why we no longer see AI in Dune, see this article (no real spoilers for the new movie).
INNOVATE festival
In February, Datakami joined the INNOVATE AI meetup 2024 at De Lindenberg in Nijmegen. A group of entrepreneurs and scientists showed 2400 audience members how AI can be applied to all kinds of societal problems, what the risks are, and how academia and companies can work together to innovate more with AI. Judith talked about what companies need in order to start experimenting with AI themselves.
Early-stage AI meetup in Apeldoorn
Datakami introduced itself to investors and regional startups at a meetup for early-stage AI companies in Apeldoorn on March 27th. We explained what Datakami can do for clients and showed some recent wins at Replicate. Thanks to regional investor Oost NL, AI-hub Oost-Nederland and venture capital fund CapitalT for organizing this!
Show us your AI problems!
Starting May 2024, we’ve got some room to take on new projects. If you're in a tech startup with a product related to generative AI, and you're VC-backed or already have a comfortable revenue stream, we're talking to you! Datakami can join your engineering team remotely, tackling difficult issues as they arise, or we can solve a specific technical problem you've got on your plate. Reach out to Yorick or Judith if you're interested!
Subscribe to our newsletter "Creative Bot Bulletin" to receive more of our writing in your inbox. We only write articles that we would like to read ourselves.