Newsletter #71 - Meta giving away a large GPT-3-like NLP model

Welcome to Nural's newsletter where you will find a compilation of articles, news and cool companies, all focusing on how AI is being used to tackle global grand challenges.

Our aim is to make sure that you are always up to date with the most important developments in this fast-moving field.

Packed inside we have

  • Meta has built a massive new language AI—and it’s giving it away for free
  • Google fires another AI researcher for questioning findings. Company says otherwise
  • and data scientists are given the superpowers they need for unstructured data machine learning

If you would like to support our continued work from £1 then click here!

Graham Lane & Marcel Hedman


Key Recent Developments


Meta has built a massive new language AI—and it’s giving it away for free

Facebook’s parent company is inviting researchers to pore over and pick apart the flaws in its version of GPT-3

What: "Meta’s AI lab has created a massive new language model that shares both the remarkable abilities and the harmful flaws of OpenAI’s pioneering neural network GPT-3. And in an unprecedented move for Big Tech, it is giving it away to researchers—together with details about how it was built and trained." The model is called Open Pretrained Transformer (OPT-175B).

Key Takeaway: Meta have made a move at the forefront of open science, releasing not just the NLP model but also the methods behind its creation. The concentration of large language models in the hands of the few with the means to train them has been a growing problem, as everyone else is cut off from the benefits unless they pay for access. It's good to see steps by these large institutions to address this.
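
As a rough illustration (not taken from the article), the smaller OPT checkpoints released alongside OPT-175B can be loaded with the Hugging Face transformers library; the checkpoint name and generation settings below are assumptions for demonstration only.

  # Minimal sketch: prompting a small OPT checkpoint (assumes the
  # `transformers` and `torch` packages are installed and that the
  # "facebook/opt-125m" checkpoint is available on the Hugging Face Hub).
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_name = "facebook/opt-125m"  # a small sibling of OPT-175B
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  model = AutoModelForCausalLM.from_pretrained(model_name)

  prompt = "Open science in AI matters because"
  inputs = tokenizer(prompt, return_tensors="pt")
  outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))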


Google Fires Another AI Researcher for Questioning Findings. Company Says Otherwise

The firing follows the dismissal of two high-profile AI researchers.

What: "Google fired an AI researcher who called into question a well-known paper the company published in 2020, a third high-profile termination in less than two years, although the company says his removal was for other reasons." The researcher has been removed from a division of Google known as Google Brain. Google Brain is a group housing Google's grandest AI research ambitions.

Key takeaway: Following the firing of Timnit Gebru, one of the leaders of its Ethical AI team, Google has had many eyes focused on how it handles disputes over research. The importance of openly critiquable research is beyond question... but in reality, is some research more critiquable than others?


AI 4 good

🚀 An ambitious initiative of the European Union to create a digital twin – an interactive computer simulation – of our planet.

🚀 Imperial College London has devised a machine-learning method to calculate the carbon emissions of consumer goods across their full life cycle, enabling comparable goods to be ranked accordingly

🚀 AI researchers at Mayo Clinic use the Apple Watch to detect silent, weakening heart disease

Other interesting reads

🚀 Autonomous ship makes second attempt at Atlantic crossing

🚀 Galileo Launches to Give Data Scientists the Superpowers They Need for Unstructured Data Machine Learning

🚀 Soundcloud acquires Musiio, an AI music curator, to improve discovery


Cool companies found this week

Climate

Verge Ag - AI-powered farm operations planning software designed to fill a critical gap in current autonomous farming solutions. They recently raised $7.5 million in Series A funding from Yamaha Motor Ventures.

Red Sea Farms - An agtech startup that recently raised $18.5 million to reduce the carbon and water footprint of the food sector by designing, developing and delivering sustainable agriculture technologies for harsh environments.


And finally...

AI/ML must knows

Foundation Models - Any model trained on broad data at scale that can be fine-tuned for a wide range of downstream tasks. Examples include BERT and GPT-3. (See also Transfer Learning)
Few-shot learning - Supervised learning in which a model masters a task from only a small number of labelled examples.
Transfer Learning - Reusing parts or all of a model trained for one task on a new task, with the aim of reducing training time and improving performance (a minimal sketch follows this list).
Generative adversarial networks - Generative models that create new data instances resembling the training data; they can be used, for example, to generate fake images.
Deep Learning - A form of machine learning based on artificial neural networks with many layers.
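
To make the Transfer Learning entry concrete, here is a minimal sketch, assuming PyTorch and a recent torchvision are available: a pretrained ResNet-18 backbone is frozen and only a new classification head (for a hypothetical 10-class task) is trained on the downstream dataset.

  import torch.nn as nn
  from torchvision import models

  num_classes = 10  # hypothetical downstream task

  # Reuse a backbone pretrained on ImageNet and freeze its weights.
  backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
  for param in backbone.parameters():
      param.requires_grad = False

  # Swap in a new head for the downstream task; only these parameters are
  # trained on the (possibly small) new dataset, which is also why transfer
  # learning underpins many few-shot approaches.
  backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)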

Best,

Marcel Hedman
Nural Research Founder
www.nural.cc

If this has been interesting, share it with a friend who will find it equally valuable. If you are not already a subscriber, then subscribe here.

If you are enjoying this content and would like to support the work financially then you can amend your plan here from £1/month!