Reflecting on our time at Blue Earth Summit

Blue Earth Summit is over, the dust has settled, and we’re reflecting on an incredible time at the conference, our second year attending. This year we were an ecosystem partner, and we ran four great sessions across three days with over 80 participants!

We ran two 90-minute interactive Catalyst Conversation sessions with the incredible Ben Keene and Tom Greenwood, titled ‘Can businesses really use AI to accelerate positive impact?’.

Specifically, we discussed questions like:

❓ How has technology changed your life?

🫠 How do you feel about AI technology?

🧐 Is AI technology ethically neutral?

🫣 Is it the responsibility of individuals to deploy the technology for good outcomes, or do those who created it bear responsibility?

🚀 If we assume AI isn’t going anywhere anytime soon, how can we leverage it for positive impact?

What were the takeaways?

Ben took us through a thought-provoking process: he asked us to think about how technology has changed our lives, invited our initial thoughts and feelings on AI, and then gave us a number of examples where AI is being deployed for positive outcomes.

Ben asked us to think about the concept that “AI is happening, machines will outsmart us, bad things will happen [but the potential for good is huge]...”. He acknowledged that this feels scary and overwhelming, but situated in the current context, where a just green transition demands transformation at huge scale, he argued that AI’s power to process big data quickly and help humans transform industries is enormous.

Ben talked through the major downsides of AI (sources linked below):

  • Massive energy consumption 

  • Tech Inequality (bot builders hold the power)

  • Narrow Solutions, rather than system change

  • Possible runaway bad intelligence (think Skynet)

He also pointed to a number of examples of the technology being used to do incredible things (sources linked below):

  • Accurately predicting weather 

  • Improving healthcare

  • Identifying pollution

  • Increasing biodiversity

  • Helping vulnerable people

  • Tackling climate injustice 

After letting the audience discuss for a while, Tom took the stage and made us take a step back!

‘AI’ has become an umbrella term under which the technology tends to fall into two broad categories (illustrated in the short sketch after this list):

  • Machine Learning Models – Essentially specialised algorithms, trained on specialised datasets, that can process and generate information, data and written language at remarkable speed. Example use cases include image and speech recognition, crunching data, fraud detection, autonomous cars and creating transcriptions from audio files. Some of these tools can also work with coding languages and mathematics to understand and generate information on request.

  • Generative or Generalised Tools – Tools that can generate rich media on demand, such as images of a particular subject, audio (including the spoken word and music), and artificially generated video content. Language models that generate new written content (such as ChatGPT or Gemini) technically also fall into this category.
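To make that distinction concrete, here is a minimal, illustrative Python sketch (our own addition, not something from the session). The first half trains a narrow machine learning classifier on toy data; the second half calls a small generative language model. The libraries (scikit-learn and Hugging Face transformers), the toy dataset and the prompt are all assumptions for illustration.

```python
# Category 1: a machine learning model -- a specialised algorithm trained
# on a specialised dataset for one narrow task (toy fraud-style text
# classification here). Assumes scikit-learn is installed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["win free money now", "meeting moved to 3pm",
         "claim your prize today", "minutes from last week's call"]
labels = [1, 0, 1, 0]  # 1 = suspicious, 0 = legitimate

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(texts)
classifier = LogisticRegression().fit(X, labels)
print(classifier.predict(vectoriser.transform(["free prize inside"])))  # likely [1]

# Category 2: a generative tool -- produces brand-new content on demand.
# Assumes the transformers library and the small GPT-2 model are available.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Businesses can use AI to", max_new_tokens=20)
print(result[0]["generated_text"])
```

The classifier can only ever answer its one narrow question, while the generative model produces open-ended new content; that difference is a big part of why the two categories attract such different controversies.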

Tom pointed out that machine learning has been around for much longer than generalised AI, which has really ramped up since the launch of ChatGPT in late 2022. Much of the controversy surrounding AI mentioned in the bullet points above is more relevant to generalised AI, so Tom asked: are we really anti or pro AI? Or are we anti or pro differing types of technology categorised as AI, which are really quite different?

We should expect these technologies, if they keep developing at the current rate, to eventually become Artificial General Intelligence (AGI), technology roughly as intelligent as humans across all domains, and then Artificial Super Intelligence (ASI), technology more intelligent than the most intelligent humans on all intellectual tasks.

Tom has created 7 principles for responsible AI usage that are well worth reading in full. Briefly, these are as follows (see the illustrative sketch after the list for one way a team might put them into practice):

  1. Mindfulness - Ask whether it actually makes sense to use AI for this task. Are there other, more appropriate tools for the situation, or is AI genuinely the best tool for the job, taking into account the social and environmental context of AI usage?

  2. Human Oversight - Are we making sure that AI output is properly checked by humans, so that it is reliable, safe and effective? AI hallucination is a real issue that has had real-world consequences.

  3. Screening for Bias - Are we checking the output from AI tools for bias? These models have bias built in because of how they are trained, and without due care, AI tools can exacerbate existing conscious and unconscious biases.

  4. Privacy - Are we checking whose data we're using? This applies to both AI input (have we asked the relevant person if we can use their data?) and AI output (where was this data collected from, and did people consent?).

  5. Transparency of Authorship - Is it clear where this information came from? Any content produced by AI, whether text, audio, imagery or video, should be clearly highlighted as such for transparency.

  6. Intellectual Property - Are we infringing on other people’s IP? This is tricky, as these models are trained on existing datasets, but in our AI usage we should avoid referencing the work of specific people (e.g. writers or artists) without their prior permission.

  7. Avoiding Fake Media - Are we creating realistic media based on the likeness of real individuals? Could these images be used in the future to spread misinformation, regardless of the original intention? We should avoid this as much as possible.
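These principles are questions of process rather than technology, but here is a hypothetical Python sketch of how a team might turn them into a simple sign-off gate for AI-generated content before it is published. The principle names come from Tom's list above; the dataclass, function names and workflow are entirely our own illustrative assumptions, not Tom's implementation.

```python
# A hypothetical sign-off gate: content is only published once every
# principle from the list above has been explicitly checked off.
from dataclasses import dataclass, fields

@dataclass
class ResponsibleAIChecklist:
    mindfulness: bool = False          # Is AI genuinely the right tool here?
    human_oversight: bool = False      # Has a human checked the output?
    screened_for_bias: bool = False    # Has the output been screened for bias?
    privacy_respected: bool = False    # Do we have consent for the data used?
    authorship_disclosed: bool = False # Is the AI origin clearly labelled?
    ip_cleared: bool = False           # Are we avoiding others' IP?
    no_fake_media: bool = False        # No realistic likenesses of real people?

def ready_to_publish(check: ResponsibleAIChecklist) -> bool:
    """Return True only if every principle has been signed off."""
    unmet = [f.name for f in fields(check) if not getattr(check, f.name)]
    if unmet:
        print("Held back - unmet principles:", ", ".join(unmet))
        return False
    return True

# Usage: a draft with no bias screening (among other gaps) is held back.
draft = ResponsibleAIChecklist(mindfulness=True, human_oversight=True)
assert not ready_to_publish(draft)
```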

Tom concluded his section by asking a really important question: what do AI and its future possibilities mean for us as humans? He discussed how the adoption of AI technology across all aspects of society could start to alter human intelligence, wisdom and intuition.

He pointed to the invention of technologies like GPS which, despite being incredibly useful, have eroded the human ability to navigate using reference points. In the past, people developed a strong sense of direction and could often instinctively tell which way was north or south; in the West, this ability has certainly declined.

We then began to discuss whether an overreliance on AI could eventually lead to a situation where we lose our ability to think critically, to process information and to draw our own conclusions.

Throughout the discussions sparked by our prompts, attendees made a few points that are particularly worth mentioning:

  • One attendee shared a real-life example of AI doing positive things: a teacher working with many neurodivergent students, for whom AI tools have improved academic performance by letting them restructure information into formats more accessible to them. Rather than detracting from the school experience, it was enabling greater engagement and passion for education!

  • Mostly, in the discussions I was a part of, attendees settled on the idea that this technology is nuanced. We can use AI as a tool and co-create with it, or we can become reliant on its outputs and lazy in our endeavours (lending weight to the argument that overreliance can reduce our ability to think).

  • An opinion I heard repeatedly from attendees was that AI can’t be considered ethically neutral, as it is a tool that reinforces the systems and inequalities we currently have, which are themselves not ethically neutral. In practical terms, this shows up most clearly in the biases mentioned above (more information linked below).

  • However, almost everyone agreed that it is too late to reverse the development of AI technology. There is a need for legislation and regulation, but given the pace of technological development, how effective these will be at curtailing the tools’ negative impacts is uncertain.

  • Ultimately, when answering the question of “how can we leverage AI for positive impact”, we weren’t sure! One person made a point that I’m still thinking about: we are already using AI for positive impact. They argued that rather than obsess over the technology itself, we should look at the human aspect: how to change human behaviour, stop people using AI for meaningless purposes, and encourage more people to deploy it as a tool for good.

What do you think? This is a really difficult topic to tackle and things are developing every day! For more information, please take a look at the reading list below, where I’ve linked some information on topics covered above, or about the tensions between AI usage and positive impact! 

Reading List:

  • Books to learn more: Scary Smart by Mo Gawdat, Co-Intelligence by Ethan Mollick, Cogs and Monsters by Diane Coyle

  • Podcast: How Green is my AI?

The Bad:

The Good:

In other news - we also organised and hosted two BBN Run Club events at Blue Earth: scenic early-morning 5Ks at a relaxed pace along the River Thames.

These runs helped promote Project Salt Run, our charitable project with 1% For The Planet, in which our founder, Hannah Cox, will run 4,030 km across India to raise money and awareness for those most affected by the climate crisis. More details on this coming soon!

Thanks to everyone who came down, we hope you enjoyed the event as much as we did 🚀
