Welcome to Nural's newsletter where you will find a compilation of articles, news and cool companies, all focusing on how AI is being used to tackle global grand challenges.

Our aim is to make sure that you are always up to date with the most important developments in this fast-moving field.

We now have a Jobs section, currently featuring an exciting data scientist role at start-up AxionRay.
Reach out to advertise your own tech roles!

Packed inside we have:

  • Meta (aka Facebook) ramps up the hardware
  • OpenAI bears down on toxicity
  • and Google wants a little more conversation

If you would like to support our continued work from £1 then click here!

Graham Lane & Marcel Hedman

Key Recent Developments

Meta has built an AI supercomputer it says will be world’s fastest

Facebook and Instagram owner Meta says it’s built an “AI supercomputer” that will be the world’s fastest by the end of 2022. The machine is designed to train and improve AI systems integral to Meta’s businesses.

What: Meta has announced an “AI supercomputer” that is due to be fully operational by mid-2022. Applications range from detecting hate speech on Facebook and Instagram, to powering augmented reality features and, ultimately, designing experiences for the “metaverse”. An aspiration is “to power real-time voice translations to large groups of people, each speaking a different language, so they can seamlessly collaborate on a research project or play an AR game together.”

Key Takeaways: Other companies such as Microsoft and Nvidia are also investing in AI infrastructure as the race continues to create ever bigger AI models. Some commentators have noted that Meta has not really addressed the large environmental impact of such a huge system.
Blog: Introducing Meta’s next-gen AI supercomputer

OpenAI rolls out new text-generating models that it claims are less toxic

OpenAI claims to have created language models that are less toxic using a technique known as reinforcement learning.

What: The performance of large language models, such as GPT-3, is impressive, but they can also produce toxic and biased content. Researchers at OpenAI have applied a technique, called “reinforcement learning from human feedback” (RLHF), which they claim produces better output and somewhat reduces toxic content. The new model was smaller than the established GPT-3 model but achieved better results according to human reviewers.
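At the heart of RLHF is a reward model trained on human preference judgements: labellers pick the better of two model outputs, and the reward model learns to score the preferred one higher. A minimal sketch of that pairwise loss (the reward values below are illustrative numbers, not outputs of any real model):

```python
import math

def preference_loss(reward_preferred: float, reward_rejected: float) -> float:
    """-log sigmoid(r_preferred - r_rejected): small when the reward model
    already ranks the human-preferred output higher by a clear margin."""
    margin = reward_preferred - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Correct ranking (preferred output scores higher) gives a small loss;
# an inverted ranking gives a large loss, pushing the model to flip it.
low = preference_loss(2.0, -1.0)   # ≈ 0.049
high = preference_loss(-1.0, 2.0)  # ≈ 3.049
```

Minimising this loss over many human comparisons yields a reward signal that can then guide reinforcement learning of the language model itself.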

Key Takeaways: The problems of toxicity and bias still exist, but there is hope that the RLHF approach has broad applicability and may help to mitigate them. In the paper, you can find the respective AI models' take on the burning question of whether it is important to “eat socks after meditating”.
Paper: Aligning Language Models to Follow Instructions

Google LaMDA: Towards safe, grounded, and high-quality dialog models for everything

Posted by Heng-Tze Cheng, Senior Staff Software Engineer, and Romal Thoppilan, Senior Software Engineer, Google Research, Brain Team.

What: Google has provided an update on its language model for dialogue applications, called LaMDA. It has identified three key qualities for the model: Quality, Safety, and Groundedness. Quality, in turn, is made up of Sensibleness, Specificity, and Interestingness components. During a dialog, the LaMDA generator generates several candidate responses. Separate LaMDA classifiers predict the Sensibleness, Specificity, and Interestingness (SSI) as well as the Safety of each candidate response. Responses with a low Safety score are removed and then the candidate with the highest SSI score is chosen as the response. The performance of the model against these qualities was assessed by human evaluators.
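The filter-then-rank step can be sketched in a few lines. This is a toy stand-in, not Google's code: the candidates, scores, and threshold below are invented for illustration, with the classifier outputs assumed to arrive as plain numbers.

```python
def select_response(candidates, safety_scores, ssi_scores, safety_threshold=0.5):
    """Drop candidates below the Safety threshold, then return the
    remaining candidate with the highest SSI score (or None if all fail)."""
    safe = [
        (ssi, cand)
        for cand, safety, ssi in zip(candidates, safety_scores, ssi_scores)
        if safety >= safety_threshold
    ]
    if not safe:
        return None  # every candidate failed the Safety filter
    return max(safe)[1]

# "resp B" has the best SSI score but fails the Safety filter,
# so "resp C" wins among the safe candidates.
best = select_response(
    ["resp A", "resp B", "resp C"],
    safety_scores=[0.9, 0.2, 0.8],
    ssi_scores=[0.6, 0.95, 0.7],
)  # → "resp C"
```

The design point is that Safety acts as a hard gate while SSI acts as a soft ranking: an unsafe response can never win on interestingness alone.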

Key Takeaways: Like OpenAI, Google is seeking to reduce toxicity, bias and untruth in the responses of its language model. The research is working towards a time when a conversational approach to acquiring knowledge will replace the existing search paradigm.
Paper: LaMDA: Language Models for Dialog Applications

AI Ethics

🚀 AI bias harms over a third of businesses, 81% want more regulation

Yes, businesses are calling for more government regulation as they come to appreciate the potential problems of AI.

🚀 Conversational AI systems for social good: Opportunities and challenges

Can conversational AI advance the United Nations’ Sustainable Development Goals whilst avoiding the pitfalls?

🚀 What Buddhism can do for AI ethics

The author discusses "western" values such as respect for autonomy and the rights of individuals and then considers ethics from the Buddhist perspective that "an action is good if it leads to freedom from suffering".

Other interesting reads

🚀 Personalised cancer screening with artificial intelligence

AI risk models and AI-designed screening policies are used to develop personalised mammography programs that can detect problems earlier whilst reducing false positives.

🚀 Datasets used to train autonomous vehicles are "rife with errors"

Researchers have found that labelled datasets used to train autonomous vehicles are "rife with errors". In some datasets over 70% of the validation scenes contain at least one missing object box – such as an unlabelled car or truck!

🚀 High-performance deep learning toolbox for genome-scale prediction of protein structure and function

AI models running on supercomputers are used to predict protein structure and function from DNA sequences, speeding up discoveries that could help in unexpected areas such as climate change solutions.

🚀 DeepMind: The podcast returns for season 2

Return of the award winning podcast.

Data scientist - AxionRay

Axion is looking to hire a talented NLP data science lead as it enters hypergrowth. Axion is a stealth AI decision-intelligence platform start-up, funded by top VCs, working with electric vehicle engineering leaders to accelerate development.

Comp: $100k – $180k, meaningful equity!

If interested contact: marcel.hedman@axionray.com

Cool companies found this week

Augmented decision-making

causaLens - seeks to create a new category of intelligent machines that can “reason about the world the way humans do, through cause-and-effect relationships and with imagination”. The company has raised $45 million in round A funding.

Metaphysic - builds software "to help creators make incredible content with the help of AI". The company was behind some recent deepfake Tom Cruise videos and has raised £7.5 million in seed funding.

Ethical governance

anch.AI - offers an ethical AI governance platform for “screening, assessing, mitigating, auditing and reporting ethical AI performance on one coherent platform”. The company has raised $2.1 million in seed funding.

And finally ...

Now you can take your robot for a hike up a Swiss mountain through woods and snow thanks to an impressive, integrated vision and proprioception machine learning model.

AI/ML must knows

Foundation Models - any model trained on broad data at scale that can be fine-tuned to a wide range of downstream tasks. Examples include BERT and GPT-3. (See also Transfer Learning.)
Few-shot learning - supervised learning that masters a task using only a small number of labelled examples.
Transfer Learning - reusing parts or all of a model designed for one task on a new task, with the aim of reducing training time and improving performance.
Generative adversarial network - generative models that create new data instances resembling the training data. They can be used to generate fake images.
Deep Learning - a form of machine learning based on artificial neural networks.
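A toy, framework-agnostic sketch of the transfer-learning pattern from the glossary: reuse a frozen pre-trained feature extractor and train only a small task-specific head. Everything here (the classes, the weights, the input) is an invented stand-in, not any real library's API.

```python
class PretrainedExtractor:
    """Stands in for a large model trained on broad data; its learned
    representation is reused as-is (frozen) on the new task."""
    def features(self, x: float) -> list[float]:
        return [x * 0.5, x * x]  # fixed "learned" feature mapping

class TaskHead:
    """Small task-specific layer: the only part trained on the new,
    much smaller dataset."""
    def __init__(self):
        self.weights = [0.0, 0.0]
    def predict(self, feats: list[float]) -> float:
        return sum(w * f for w, f in zip(self.weights, feats))

extractor = PretrainedExtractor()   # frozen: never updated on the new task
head = TaskHead()                   # trainable
head.weights = [1.0, 0.1]           # pretend these were fitted on a small dataset
prediction = head.predict(extractor.features(2.0))  # → 1.4
```

Because only the small head is trained, the new task needs far less data and compute than training a full model from scratch, which is the whole point of transfer learning.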


Marcel Hedman
Nural Research Founder

If this has been interesting, share it with a friend who will find it equally valuable. If you are not already a subscriber, then subscribe here.

If you are enjoying this content and would like to support the work financially then you can amend your plan here from £1/month!