Welcome to Nural's newsletter where you will find a compilation of articles, news and cool companies, all focusing on how AI is being used to tackle global grand challenges.
Our aim is to make sure that you are always up to date with the most important developments in this fast-moving field.
Packed inside we have:
- UK government health minister promotes "health data inclusivity"
- Robotic dog mounted with a rifle demonstrated at military conference
- plus, a drone races through a forest without a map ...
If you would like to support our continued work from £1 then click here!
Graham Lane & Marcel Hedman
Key Recent Developments
AI projects to tackle racial inequality in UK healthcare
What: The UK Health Secretary has committed to addressing health inequalities in the NHS including ensuring that health datasets adequately represent people from ethnic minority backgrounds. Another project will use AI to investigate factors behind adverse maternity incidents involving ethnic minority mothers.
Key Takeaways: An emphasis on health data inclusivity is a solid starting point for developing AI health systems that benefit all. Initiatives using AI to address specific areas of health inequality are pragmatic and can deliver measurable results.
They’re putting guns on robot dogs now
What: At the recent Association of the United States Army annual conference, a robotic quadruped from Ghost Robotics was demonstrated carrying a “special purpose unmanned rifle” made by Sword International. Currently the trigger is fully operated by a remote human. The U.S. military already uses unarmed robotic dogs to patrol the perimeter of an air base.
Key Takeaways: The CEO of Ghost Robotics described the robot as a “walking tripod” and emphasised that they don’t have responsibility for the “payload” (i.e. the gun) on the robot. Boston Dynamics, makers of the well-known Spot robot, have declared that they will not weaponise their robots. But in the absence of clear regulation, other companies are sure to step into this space.
Twitter's algorithm favours right-leaning politics, research finds
What: Twitter investigated how its algorithm recommends political content to users, analysing millions of tweets sent in 2020. They found that mainstream parties and outlets on the political right enjoyed higher levels of "algorithmic amplification" compared with their counterparts on the left. The researchers could not explain this pattern but will investigate further. Additionally, they did not find evidence that the algorithm promotes "extreme ideologies more than mainstream political voices".
Key Takeaways: Twitter has acted responsibly in carrying out this research and being transparent about the results. A key step in avoiding bias in AI is to keep monitoring operational systems for unanticipated shifts.
A new paper explores the complexities of how to teach a machine to behave ethically, demonstrating this with an ethical API interface.
a) Responsible investing in tech; b) targeted tech regulations; c) tech ethics to be mandatory in higher education
A new SaaS-based model offers smaller companies ethical AI consultancy and a bespoke toolkit to monitor compliance.
Other interesting reads
The recording of the recent Climate Change AI webinar is now available on YouTube.
U.S. court papers apparently reference a previously unreported heist that took place in Hong Kong.
A facial recognition system enabling passengers to pay their fare at turnstiles with cameras has been rolled out at over 240 stations.
Descriptions of the winning entries are available on the Deeplearning.ai blog.
Cool companies found this week
Robotic Dogs as a Service
Luke Skywalker, eat your heart out ...
AI/ML must knows
Foundation Models - Any model trained on broad data at scale that can be adapted (e.g. fine-tuned) to a wide range of downstream tasks. Examples include BERT and GPT-3. (See also Transfer Learning.)
Few-shot learning - Supervised learning in which a model masters a task from only a handful of labelled examples.
Transfer Learning - Reusing all or part of a model trained on one task for a new task, with the aim of reducing training time and improving performance.
Generative adversarial network - A generative model in which two networks, a generator and a discriminator, compete; the generator learns to create new data instances that resemble the training data, and can be used to produce convincing fake images.
Deep Learning - A form of machine learning based on artificial neural networks with multiple layers.
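In practice, transfer learning often means freezing a pretrained feature extractor and training only a small new "head" on the target task. Here is a minimal pure-Python sketch of that idea — the "pretrained" extractor below is a hypothetical stand-in, not a real model:

```python
# Stand-in for a pretrained feature extractor. In real transfer learning this
# would be a large network (e.g. a BERT or ResNet backbone) whose weights are
# frozen, i.e. never updated during fine-tuning.
def pretrained_features(x):
    """Map a raw input to a fixed feature vector (frozen, not trained)."""
    return [x, x * x]

def predict(x, w, b):
    """Linear head on top of the frozen features."""
    f = pretrained_features(x)
    return sum(wi * fi for wi, fi in zip(w, f)) + b

def train_head(data, lr=0.05, epochs=1000):
    """Fit only the new head's weights; the extractor stays frozen."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = pretrained_features(x)
            err = predict(x, w, b) - y
            # Gradient step updates the head only.
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

# A handful of labelled examples for the new task (here y = x^2 + 1),
# echoing the few-shot setting: little data, reused representation.
data = [(0, 1), (1, 2), (2, 5), (-1, 2)]
w, b = train_head(data)
```

Because only the small head is trained, good performance is possible with far less data and compute than training the whole model from scratch — which is what makes foundation models reusable across downstream tasks.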
Nural Research Founder
If this has been interesting, share it with a friend who will find it equally valuable. If you are not already a subscriber, then subscribe here.
If you are enjoying this content and would like to support the work financially then you can amend your plan here from £1/month!