Welcome to Nural's newsletter, where we explore how AI is being used to tackle global grand challenges.

As always, you will find a compilation of articles, news and cool companies, all focused on using AI to tackle these challenges.

Packed inside we have:

  • What it's like for an elderly person living with an AI-powered robo-helper
  • OpenAI's Codex providing automated coding support
  • Plus, Twitter ran a competition to locate bias within its image cropping algorithm
  • and more...

If you would like to support our continued work from £1, then click here!

Graham Lane & Marcel Hedman


Key Recent Developments


ElliQ is 93-year-old Juanita's friend. She's also a robot

New technologies aim to help comfort, entertain and inform seniors but critics say machines trying to mimic human intimacy raise ethical issues

What: Interview with a 93-year-old woman, Juanita, who has used a commercial, AI-powered robot companion called ElliQ for about two years. ElliQ is advertised as a “sidekick for happier ageing” and is aimed at older adults without dementia. Unlike Alexa, ElliQ is proactive.

Key Takeaways: Juanita refers to the robot as female, always thanks her and says “goodnight”. But ultimately she describes her as an “added ornament to my life”. Critics question the value of interactions with “no mutuality, no real shared experience”. They are concerned that such a companion may reduce human interactions. What sort of future do we want for our elderly?


OpenAI upgrades its natural language AI coder Codex and kicks off private beta


What: OpenAI has launched a significant upgrade to its AI-powered coding assistant, Codex. The system now accepts commands in plain English and outputs live, working code! For example, a games developer may say “make the boulder fall from the sky”. Codex will then drop a boulder from the top of the screen without any prior instruction as to what “the sky” is.

Key Takeaway: Some commentators are heralding a new era of low-code interaction with computers, whereas OpenAI itself has identified a range of hazards associated with the technology. These hazards span “safety, security, and economic factors”, including producing code misaligned with user intent. Will we find ways to limit the risks while maximising the benefits of this technology?


Twitter's photo-cropping algorithm preferred young, beautiful, and light-skinned faces

Twitter’s photo-cropping algorithm favors faces that are young, slim, and light-skinned, according to the results of the company’s first algorithmic bias bug bounty, an industry first for finding bias in machine learning and artificial intelligence systems.

What: In March 2021, Twitter phased out its automated system for cropping images in image preview boxes. The previous system worked by presenting the most ‘visually interesting’ area but the concern was that the algorithm demonstrated bias across gender and race in doing so.

Key Takeaway: To the delight of many, Twitter recently ran a public competition with a cash prize to test this hypothesis, and the results do indeed demonstrate a number of different biases. For example, the algorithm preferred light-skinned faces over dark-skinned faces. The long-term solution is likely to involve more human involvement in image processing.


AI Ethics

🚀 How computer vision works — and why it’s plagued by bias

The release of ImageNet was a watershed moment in computer vision, but it is now identified as a culprit in problems of bias.

🚀 AI datasets are prone to mismanagement, study finds

New research identifies problematic image collections that continue to be used in research despite being taken offline.

🚀 Combatting Anti-Blackness in the AI Community

Review of invisible barriers, social discrepancies, and recruitment policies that contribute to discrimination in the AI community.

🚀 Can you trust artificial intelligence?

When does “nudge” become manipulation in an AI-based system? How can companies build trust in AI?

Other interesting reads

🚀 NeurIPS 2021 Workshop: Tackling Climate Change with Machine Learning

Call for short papers using machine learning to address problems in climate mitigation, adaptation, and modeling.

🚀 AI may diagnose dementia in a day

Currently it can take several scans and tests to diagnose dementia, delaying treatment and causing a range of problems for patients.

🚀 Researchers use artificial intelligence to unlock extreme weather mysteries

A new machine learning approach helps scientists understand why extreme precipitation days in the Midwest are becoming more frequent.


Cool companies found this week

Industry and transport

Fetch.ai - a Cambridge-based startup combining blockchain, AI and IoT, with applications in areas such as predictive maintenance. The company is working with Bosch.

Applied data science

Dataiku - specialises in simple, graphical tools for data science pipelines with a mission to democratise access to data for individuals and enterprises. The company has raised an additional $400 million in funding.

Climate change

Climate-X - a London-based startup that raised £1.1 million in initial funding. The company offers "location-specific climate risk intelligence" by combining the latest climate models with real-world data in a geo-spatial system.


AI rendering of an "art deco Buddhist temple"

A new Twitter account - Images Generated By AI Machines (@images_ai) - publishes strange AI-generated visions.

AI/ML must knows

Few-shot learning - Supervised learning using only a small dataset to master the task.
Transfer learning - Reusing parts or all of a model designed for one task on a new task, with the aim of reducing training time and improving performance.
TensorFlow/Keras/PyTorch - Widely used machine learning frameworks.
Generative adversarial network - Generative models that create new data instances resembling the training data. They can be used to generate fake images.
Deep learning - A form of machine learning based on artificial neural networks.
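To make the transfer learning entry above concrete, here is a minimal conceptual sketch in plain Python. It is not how you would do this in practice (real transfer learning reuses a deep network pretrained in a framework such as TensorFlow or PyTorch); the "pretrained" feature extractor here is a stand-in fixed function, and the toy dataset is invented purely for illustration. Only the small new "head" is trained, while the extractor stays frozen, which is the core idea.

```python
import random

random.seed(0)

# Stand-in for a pretrained feature extractor: in practice this would be a
# deep network trained on a large dataset. Its "parameters" are frozen, i.e.
# never updated during fine-tuning.
def pretrained_features(x):
    return [x, x * x]  # maps a raw input to a 2-dimensional feature vector

# A new task-specific "head": a tiny linear model trained from scratch on a
# small dataset -- the essence of transfer learning.
weights = [0.0, 0.0]
bias = 0.0

# Small labelled dataset for the new task: y = 3x^2 + 1. It is linear in the
# frozen feature space, so the head alone can fit it.
data = [(x, 3 * x * x + 1) for x in [-2, -1, 0, 1, 2]]

# Train only the head with plain stochastic gradient descent;
# the feature extractor is never touched.
lr = 0.01
for _ in range(5000):
    for x, y in data:
        f = pretrained_features(x)
        pred = weights[0] * f[0] + weights[1] * f[1] + bias
        err = pred - y
        weights[0] -= lr * err * f[0]
        weights[1] -= lr * err * f[1]
        bias -= lr * err

# Predict on an unseen input; the true value is 3 * 1.5^2 + 1 = 7.75
f = pretrained_features(1.5)
print(round(weights[0] * f[0] + weights[1] * f[1] + bias, 2))
```

Because the small dataset only has to fit a two-weight head rather than a whole network, training is fast and needs little data, which is exactly why transfer learning (and its extreme case, few-shot learning) is so widely used.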

Best,

Marcel Hedman
Nural Research Founder
www.nural.cc

If this has been interesting, share it with a friend who will find it equally valuable. If you are not already a subscriber, then subscribe here.

If you are enjoying this content and would like to support the work financially then you can amend your plan here from £1/month!