Welcome to Nural's newsletter focusing on how AI is being used to tackle global grand challenges.

Note that there will be a break in the newsletter next week for the summer holiday period!

Packed inside this week, we have:

  • British supermarket chain under fire over its use of ‘Orwellian’ facial recognition technology
  • Man sues city of Chicago, claiming its AI wrongly imprisoned him
  • and A.I. is rapidly transforming biological research - DeepMind builds on prior AlphaFold work

If you would like to support our continued work from £1 then click here!

Marcel Hedman


Key Recent Developments


Supermarket chain under fire over its use of ‘Orwellian’ facial recognition technology

Supermarket chain under fire for “Orwellian” and “deeply unethical” facial recognition tech
Big Brother Watch described the biometric system as “Orwellian in the extreme,” “deeply unethical” and “chilling.”

What: The Co-op supermarket chain has come under fire from Big Brother Watch, a privacy group, which described the chain’s FaceWatch security system as “Orwellian in the extreme,” “deeply unethical” and “chilling.”

Big Brother Watch said: “The supermarket is adding customers to secret watch-lists with no due process, meaning shoppers can be spied on, blacklisted across multiple stores and denied food shopping despite being entirely innocent.”

Co-op have stated their aim is to "balance our customers’ rights with the need to protect our colleagues and customers from unacceptable violence and abuse."

Key Takeaway: AI has great potential to support the protection of staff across retailers. In this case, the system only stores images of those who have previously been banned from stores following an investigation. Perhaps the use of the system in this way has some validity... However, serious questions around privacy, as well as how to handle false positive identifications, will need to be navigated to convince regulators and the public that this is not intrusive and unethical.


Man sues city of Chicago, claiming its AI wrongly imprisoned him

Man Sues City of Chicago, Claiming Its AI Wrongly Imprisoned Him
Michael Williams has filed a lawsuit against Chicago on the grounds that an AI policing program called ShotSpotter led to his wrongful arrest.

What: The theme of AI for identification continues with this article. Michael Williams, a 65-year-old Chicago resident, has filed a lawsuit against the city on the grounds that a controversial AI program called ShotSpotter led to his essentially evidence-less arrest and a year in jail. The federal suit alleges that officers put "blind faith" in the gunshot-locating tech, which not only led to an undue arrest but ultimately stopped police from pursuing other leads as well.

The program in question, ShotSpotter, has previously been discussed in Nural newsletters. The technology claims to be able to locate gunshots with a "97 percent aggregate accuracy rate for real-time detections across all customers". Independent investigations seem less convinced, with reports that 89% of alerts lack evidence.

Key Takeaway: Incorporating AI into decisions about a person's guilt or innocence raises serious, unresolved questions about liability. Who is responsible when an AI system used in this way goes wrong?

What is fundamentally clear is that it's vital for those using the technology to understand the underlying strengths and limitations before applying it in such critical situations.


AI Ethics & social good

🚀 A.I. is rapidly transforming biological research—with big implications for everything from drug discovery to agriculture to sustainability - Major development (DeepMind)

🚀 How rangers are using AI to help protect India's tigers

🚀 Man sues city of Chicago, claiming its AI wrongly imprisoned him

🚀 Documents reveal advanced AI tools Google is selling to Israel

Other interesting reads

🚀 U.S. Army Research Lab Expands Artificial Intelligence and Machine Learning Contract with Palantir for $99.9M

🚀 A Brooklyn-based artist is harnessing the power of DALL-E to reimagine roadways to be more friendly to pedestrians and bikes

🚀 Meta AI open-sourced Theseus, a library for incorporating domain knowledge in ML models

🚀 Department of Energy Announces Latest Challenge in Competition Aimed at Identifying Power Grid Solutions


Cool companies found this week

Insurance

Tractable - Computer vision tools for claim assessment within the automotive and property industries. They have reached unicorn status (>$1bn valuation).

Health

Diagnostic Robotics - Medical-grade AI triage and clinical-predictions platform


...and Finally

LHS: original design; RHS: DALL-E redesign to be more eco-friendly - Zach Katz/OpenAI

AI/ML must knows

Foundation Models - any model trained on broad data at scale that can be fine-tuned to a wide range of downstream tasks. Examples include BERT and GPT-3. (See also Transfer Learning)
Few-shot learning - Supervised learning that masters a task using only a small number of labelled examples.
Transfer Learning - Reusing parts or all of a model designed for one task on a new task with the aim of reducing training time and improving performance.
Generative adversarial network - Generative models that create new data instances that resemble your training data. They can be used to generate fake images.
Deep Learning - A form of machine learning based on artificial neural networks, typically with many layers.
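The Transfer Learning entry above can be made concrete with a toy sketch in plain Python (the tasks and numbers are invented purely for illustration): we "pretrain" a one-parameter model y = w·x on a source task, then reuse the learned weight as the starting point for a related target task, where a tiny fine-tuning budget goes much further than training from scratch.

```python
# Toy illustration of transfer learning with a one-parameter model y = w * x.
# All tasks and numbers here are invented for illustration only.

def train(w, data, lr=0.1, steps=50):
    """Fit w by gradient descent on squared error, one example at a time."""
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

# "Pretraining" on a source task where the true relationship is y = 2.0 * x
source_task = [(0.5, 1.0), (1.0, 2.0), (1.5, 3.0)]
w_pre = train(0.0, source_task)  # converges very close to 2.0

# A related target task (y = 2.1 * x) with only two examples and 3 steps
target_task = [(1.0, 2.1), (2.0, 4.2)]
w_scratch = train(0.0, target_task, steps=3)     # cold start from zero
w_transfer = train(w_pre, target_task, steps=3)  # warm start from pretraining

# With the same small budget, the transferred weight lands nearer the target
print(abs(w_transfer - 2.1) < abs(w_scratch - 2.1))  # prints True
```

The same reasoning scales up: in practice the reused "weight" is a large pretrained network (e.g. BERT, as in the Foundation Models entry), and fine-tuning adjusts it on the small downstream dataset.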

Best,

Marcel Hedman
Nural Research Founder
www.nural.cc

If this has been interesting, share it with a friend who will find it equally valuable. If you are not already a subscriber, then subscribe here.

If you are enjoying this content and would like to support the work financially then you can amend your plan here from £1/month!