Welcome to Nural's newsletter where you will find a compilation of articles, news and cool companies, all focusing on how AI is being used to tackle global grand challenges.
Our aim is to make sure that you are always up to date with the most important developments in this fast-moving field.
We now have a Jobs section, currently featuring an exciting data scientist role at the startup AxionRay.
Reach out to advertise your own tech roles!
Packed inside we have:
- OpenAI tackles the Mathematics Olympiad
- DeepMind pushes the boundaries of automated coding
- Google robots tackle novel tasks without additional training
If you would like to support our continued work from £1 then click here!
Graham Lane & Marcel Hedman
Key Recent Developments
OpenAI solving (some) formal Math Olympiad problems
What: OpenAI researchers describe an AI system that is capable of automatically solving multiple challenges of increasing difficulty drawn from high school mathematics competitions. It also solved two challenging problems from the elite International Mathematical Olympiad (IMO).
The approach uses Lean, an open-source formal proof language and theorem prover. This is integrated with a derivative of OpenAI's GPT-3 transformer model that is specialised in generating tactics for mathematical proofs. The verified proofs generated by the system are then fed back into training the transformer model, which continues to improve its performance.
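The prove-then-retrain loop can be illustrated with a toy, self-contained sketch. Everything below is an illustrative stand-in, not OpenAI's actual system: the "tactics" are just steps of +1 or +2, the "checker" accepts a sequence that sums exactly to a target, and "retraining" re-weights the tactic policy toward steps seen in verified proofs.

```python
import random

def attempt_proof(model, target, max_steps=6):
    """Sample a sequence of '+1'/'+2' tactics; the toy 'checker' accepts
    the sequence if the steps sum exactly to the target."""
    total, steps = 0, []
    for _ in range(max_steps):
        step = random.choices([1, 2], weights=model)[0]
        total += step
        steps.append(step)
        if total == target:
            return steps              # verified "proof"
        if total > target:
            return None               # overshot: attempt failed
    return None

def retrain(model, proofs):
    """Re-weight the tactic policy toward steps seen in verified proofs."""
    counts = [1, 1]                   # Laplace smoothing
    for proof in proofs:
        for step in proof:
            counts[step - 1] += 1
    total = sum(counts)
    return [c / total for c in counts]

def expert_iteration(targets, rounds=5, seed=0):
    """Alternate between proof search and retraining on successes."""
    random.seed(seed)
    model = [0.5, 0.5]                # uniform initial tactic policy
    for _ in range(rounds):
        proofs = []
        for t in targets:
            proof = attempt_proof(model, t)
            if proof is not None:
                proofs.append(proof)
        model = retrain(model, proofs)
    return model
```

The key idea mirrors the paper: only proofs the checker accepts make it into the next round of training, so the policy improves without any human-labelled proof data.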
Key Takeaways: The paper indicates that the approach is currently only capable of chaining together 2 or 3 non-trivial steps in a proof. It is sometimes capable of solving challenging IMO problems, but not consistently; improving this consistency is a proposed area for future research. The work follows hot on the heels of a paper from DeepMind in December 2021 that used machine learning to detect patterns in mathematics.
Paper: Formal Mathematics Statement Curriculum Learning
DeepMind claims its new code-generating system is competitive with human programmers
What: DeepMind has unveiled AlphaCode, an AI model that can write computer code in a range of different languages. The system participated in competitions hosted on Codeforces, a platform for programming contests. DeepMind claims that, in terms of overall performance, it was within the top 28% of users who participated in a contest in the previous six months. The founder of Codeforces said that AlphaCode performed “at the level of a promising new competitor”. AlphaCode is based on a well-established AI Transformer architecture. The impressive results have been achieved mainly through engineering expertise in areas such as data management, clever training of the model, huge datasets and ample computing power.
Key Takeaways: Automation of computer coding is considered commercially attractive. OpenAI have already launched Codex, an AI model for translating natural language commands into code, and Microsoft has integrated this into GitHub as the Copilot feature. Now DeepMind has advanced the state of the art in this arena with a system that can “read the natural language descriptions of an algorithmic problem and produce code that not only compiles, but is correct”.
Blog: Competitive programming with AlphaCode
Can robots follow instructions for new tasks?
What: Google researchers have achieved the remarkable result of training a robot to complete novel tasks in novel settings without any additional training. The approach was similar to training a driverless car. First, a human operates the robot to carry out certain labelled tasks for the initial training of the robot’s neural network. Then the robot undertakes the same tasks under close supervision, with the human stepping in and providing correction if necessary. The robot’s neural network was trained on 100 such tasks. Finally, the robot was given 28 novel tasks and completed 24 of them correctly. One example of a new task was to pick up grapes and place them into a ceramic bowl: the training tasks involved doing other things with the grapes and placing other items into the ceramic bowl, and the grapes and the ceramic bowl had never appeared in the same scene during training.
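The supervised-correction step described above can be sketched as a minimal intervention loop. This is an illustrative toy, not Google's implementation: states and actions are single numbers, and the "human" steps in whenever the robot's action drifts too far from what the supervisor would have done.

```python
def correction_round(policy, expert, states, dataset, threshold=0.5):
    """One round of supervised correction: the robot's policy proposes an
    action for each state; when it drifts too far from the human
    supervisor's action, the correction is recorded for retraining."""
    for state in states:
        action = policy(state)        # robot acts on its own
        target = expert(state)        # what the human would have done
        if abs(action - target) > threshold:
            dataset.append((state, target))   # human steps in, corrects
    return dataset                    # grown dataset for retraining
```

For example, an untrained policy that always outputs 0 collects corrections everywhere the expert disagrees; retraining on the grown dataset and repeating the round is what gradually closes the gap.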
Key Takeaways: AI language models demonstrate some degree of “generalisation”. For example, if a translation model is trained with sentences like “pick up a cup” and “push a bowl” then the model should also translate “push a cup” correctly. The researchers have demonstrated that this type of generalisation can also be extended to real robots i.e. being able to compose unseen object-object and task-object pairs.
Paper: Zero-Shot Task Generalization with Robotic Imitation Learning
A review of the bi-annual report from the renowned Montreal AI Ethics Institute (MAIEI).
A review of AI research compiled by a Microsoft cross-company initiative on AI Ethics and Effects in Engineering and Research, working with collaborators in academia and industry.
Describes the open source Call for Code for Racial Justice project that was initially incubated by the black community in IBM as part of the response to the Black Lives Matter movement.
Other interesting reads
Describes an ML-based approach that predicts the complex that will form when two proteins bind together. This is claimed to be 80-500 times faster than current state of the art methods.
A robot carried out surgery on pigs, including intricate and delicate tasks such as suturing two ends of an intestine.
It's not a grand challenge, but the differing fortunes of Meta (which lost a record $232 billion in market value in one day) compared with profit-making Google and Apple provide a stark reminder of the value of owning the data.
The Alexa Prize is a prestigious annual chatbot competition. In this article the Stanford team reflects on understanding and predicting user dissatisfaction, handling offensive users, and increasing user initiative.
Data scientist - AxionRay
AxionRay is looking to hire a talented NLP data science lead as it enters hypergrowth. Axion is a stealth AI decision-intelligence platform start-up, funded by top VCs, working with electric vehicle engineering leaders to accelerate development.
Comp: $100k – $180k, meaningful equity!
If interested contact: email@example.com
Cool companies found this week
Celestial AI - a platform that enables the next generation of high-performance computing through proprietary technology that uses light for data movement, both within a chip and between chips. The company raised $56 million in Series A funding.
Annotell - a platform for training, testing, measuring, and arguing the safety of the perception systems of autonomous driving systems. They describe it as “a vision test for cars getting their drivers license” and have raised $24 million in Series A funding.
AI/ML must knows
Foundation Models - any model trained on broad data at scale that can be fine-tuned for a wide range of downstream tasks. Examples include BERT and GPT-3. (See also Transfer Learning.)
Few-shot learning - supervised learning that masters a task from only a small number of labelled examples.
Transfer Learning - reusing part or all of a model trained for one task on a new task, with the aim of reducing training time and improving performance.
Generative adversarial network - a generative model in which two networks compete: a generator creates new data instances that resemble the training data, while a discriminator tries to tell real from generated. GANs can be used to produce realistic fake images.
Deep Learning - a form of machine learning based on artificial neural networks with many layers.
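To make the Transfer Learning entry concrete, here is a minimal, self-contained sketch. All names are illustrative (no real library is used): a "pretrained" feature extractor is kept frozen, and only a freshly initialised linear head is trained on the new task.

```python
def fine_tune(pretrained_features, data, lr=0.02, epochs=200):
    """Transfer-learning sketch: the pretrained feature extractor stays
    frozen; only a new linear 'head' (w, b) is trained on the new task."""
    w, b = 0.0, 0.0                       # freshly initialised head
    for _ in range(epochs):
        for x, y in data:
            f = pretrained_features(x)    # frozen: never updated
            err = (w * f + b) - y
            w -= lr * err * f             # gradient step on the head only
            b -= lr * err
    return w, b
```

Freezing the extractor is the point of the technique: the expensive, broadly-trained part of the model is reused as-is, and only the small task-specific head needs data and compute.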
Nural Research Founder
If this has been interesting, share it with a friend who will find it equally valuable. If you are not already a subscriber, then subscribe here.
If you are enjoying this content and would like to support the work financially then you can amend your plan here from £1/month!