Welcome to Nural's newsletter where we explore how AI is being used to tackle global grand challenges.

As always, you will find a compilation of articles, news and cool companies, all focused on using AI to tackle global grand challenges.

Packed inside we have:

  • Elon Musk says Tesla is working on humanoid robots!
  • a Stanford-based centre researching and critiquing AI 'foundation models'
  • a start-up working on brain-computer interfaces
  • and more...

If you would like to support our continued work from £1, click here!

Graham Lane & Marcel Hedman

Key Recent Developments

Introducing the Center for Research on Foundation Models (CRFM)

Introducing the Center for Research on Foundation Models (CRFM)
This new center at Stanford convenes scholars from across the university to study the technical principles and societal impact of foundation models.

What: A new center at Stanford has been established that convenes scholars from across the university to study the technical principles and societal impact of foundation models. Foundation models are models trained on broad data at scale such that they can be adapted to a wide range of downstream tasks (e.g., BERT, GPT-3).

The Center explains that “Foundation models have demonstrated impressive behaviour, but can fail unexpectedly, harbor biases, and are poorly understood. Nonetheless, they are being deployed at scale”.

Key Takeaways: This research center could be the start of establishing key professional norms around these large pretrained ("foundation") models that are being deployed widely. Importantly, these norms could help to mitigate concerns around biases and tackle key questions around data ownership and privacy.

Paper: On the Opportunities and Risks of Foundation Models

Elon Musk says Tesla is working on humanoid robots

Elon Musk says Tesla is working on humanoid robots
Tesla CEO Elon Musk has announced that his company is working on humanoid robots, leveraging the company’s experience with automated machinery in its factories as well as the hardware and software that power the Autopilot system in its cars.

What: At Tesla's AI Day, they "unveiled a humanoid robot called the Tesla Bot that runs on the same AI used by Tesla's fleet of autonomous vehicles. A functioning version of the robot didn't make an appearance during Musk's reveal, though a slightly bizarre dance by a performer dressed like a Tesla Bot did." - CNET.

A prototype of the robot could be coming as soon as next year and would be made to handle “tasks that are unsafe, repetitive or boring.”

Key Takeaways: As robotics becomes more advanced and more humanoid, some of the existential questions once reserved for sci-fi may become more pertinent. How can we ensure that ethical and regulatory frameworks progress as quickly as the tech?

The recent "Tesla Bot" launch featured an unusual dance routine.

The above is a human, not a robot.

Amazon taps its SocialBot challenge to boost conversational AI

Amazon taps its SocialBot challenge to boost conversational AI
To boost conversational AI (and embrace university research), Amazon recently hosted its fourth Alexa Prize SocialBot Grand Challenge.

What: The winners of the annual Amazon Alexa Prize SocialBot Challenge have been announced. The Challenge is to create a socialbot that can “converse coherently and engagingly for 20 minutes with humans on a range of current events and popular topics" while maintaining good quality. Examples of this year’s dialogue are in this Amazon video. “All three finalists achieved [some] 20-minute conversations, which had only ever been achieved once [before]. Additionally, the average time achieved across social bots [was] more than double the average time from 2020’s finals”.

Key Takeaways: The state of the art of conversational AI continues to progress at pace. This Amazon announcement follows Google’s recent launch of their “breakthrough conversation technology” called LaMDA.

AI Ethics

🚀 Is AI good or bad – and who decides?

"Deployment of AI is pushing ahead faster than the ethical framework needed to control it." - Seemingly benign tech becomes dangerous when deployed at scale.

🚀 Protecting the Ecosystem: AI, Data and Algorithms

Conference on 9 September organised by Montreal AI Ethics Institute: How AI is being leveraged for a greener future.

🚀 AI accountability: Who's responsible when AI goes wrong?

Answer: It’s not clear. That is why companies need to protect themselves by codifying their ethical standards.

Other interesting reads

🚀 How machine learning is helping us fine-tune climate models to reach unprecedented detail

ML is being used to build a more comprehensive picture from ocean satellite data and to produce more fine-grained climate projections.

🚀 Can AI make a better nuclear fusion reactor?

Nuclear physics may be one of machine learning's newest frontiers.

🚀 Tesla Autopilot: US opens official investigation into self-driving tech

There is a particular concern that Autopilot is unable to deal with emergency vehicles attending an incident.

🚀 John Deere buys autonomous tractor startup Bear Flag Robotics

Meanwhile, back at the ranch, another step towards self-driving tractors within the confines of a farm.

Cool companies found this week

Mark Zuckerberg wants Facebook to become a metaverse company (an online world where people can game, work and communicate in a virtual environment). Here are some companies working in this space.

Kernel – produces non-invasive, full-coverage, optical headsets that can be used to record real-time patterns of brain activity.

Spatial – specialises in virtual meeting rooms with 3D avatars that can interact with objects such as a whiteboard.

Synchron - makes an implantable brain-computer interface (BCI). It has recently been granted permission by the U.S. Food and Drug Administration to run a clinical trial with human patients.

AI/ML must knows

Foundation Models - any model trained on broad data at scale that can be fine-tuned to a wide range of downstream tasks. Examples include BERT and GPT-3. (See also Transfer Learning)
Few-shot learning - Supervised learning that masters a task from only a handful of labelled examples.
Transfer Learning - Reusing parts or all of a model designed for one task on a new task with the aim of reducing training time and improving performance.
Generative adversarial network - A generative model in which two networks (a generator and a discriminator) compete, learning to create new data instances that resemble the training data. GANs can be used to generate realistic fake images.
Deep Learning - A form of machine learning based on artificial neural networks with many layers.
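To make the transfer-learning idea above concrete, here is a minimal toy sketch (an illustration, not any real system's code): a "pretrained" feature extractor is kept frozen, and only a small linear head is trained on the new task. The `pretrained_features` function is a hypothetical stand-in for a large pretrained model such as BERT, and the dataset is invented for the example.

```python
import math

def pretrained_features(x):
    # Stand-in for a frozen pretrained model: it maps a raw input to a
    # feature vector, and its "weights" are never updated during fine-tuning.
    return [x, x * x]

# Tiny downstream dataset: label is 1 when x > 0.5
data = [(i / 10, 1 if i > 5 else 0) for i in range(11)]

# Trainable "head": logistic regression over the frozen features
w, b = [0.0, 0.0], 0.0
lr = 0.5
for _ in range(2000):
    for x, y in data:
        f = pretrained_features(x)
        z = sum(wi * fi for wi, fi in zip(w, f)) + b
        p = 1 / (1 + math.exp(-z))  # sigmoid
        err = p - y                 # gradient of log-loss w.r.t. z
        w = [wi - lr * err * fi for wi, fi in zip(w, f)]
        b -= lr * err

def predict(x):
    # Only the head's parameters (w, b) were learned on the new task
    f = pretrained_features(x)
    return 1 / (1 + math.exp(-(sum(wi * fi for wi, fi in zip(w, f)) + b)))
```

In practice the head would sit on top of a network with millions of parameters; the point of transfer learning is that only the small head (or a small fraction of the model) needs training on the new task.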


Marcel Hedman
Nural Research Founder

If this has been interesting, share it with a friend who will find it equally valuable. If you are not already a subscriber, then subscribe here.

If you are enjoying this content and would like to support the work financially then you can amend your plan here from £1/month!