Explainable AI
We need to understand why an AI has taken a decision, hence explainable AI. Do LLMs have it?
Feb 3

Published in Barnacle Labs
Unlucky for some: 13 AI 🔮 predictions for 2024
As we usher in 2024, I’m excited to share my Generative AI predictions for the year ahead. There’s 13 of them, so maybe I will be unlucky…
Dec 31, 2023

Published in Barnacle Labs
Why GPTs aren’t (yet) the new App Store
OpenAI just released GPTs, which are customised versions of ChatGPT.
Nov 13, 2023

Published in Barnacle Labs
An honest guide for innovators everywhere
Innovation requires honesty and here I share some experiences and lessons from the trenches, slaying some big company myths in the process…
Oct 13, 2023

Published in Barnacle Labs
Necessity is the mother of (GenAI) invention
A lot of innovation in the Generative AI space, especially with open source, is focused on making things smaller, cheaper, faster…
Aug 8, 2023

Published in Barnacle Labs
Beyond Data Hoovering: The Nuanced Reality of Training Large Language Models (LLMs)
Training Large Language Models (LLMs) is an evolving science. In this post I set out to shed some light on what’s involved.
Jul 19, 2023

Published in Barnacle Labs
How green is your machine learning?
There is much work to do in greening ML workloads, but many reasons to be optimistic about the ability to dramatically increase efficiency.
Jun 25, 2023

Published in Barnacle Labs
GPT-4 reasoning
One of the most impressive aspects of GPT-4 is its ability to reason. What does that mean for conversational systems and technology?
May 19, 2023

Published in Barnacle Labs
Should you be worried about information security in the age of
Apr 9, 2023

Published in Barnacle Labs
Training a Large Language Model on your content
By exploiting embeddings, vector stores and prompt templates, we can coerce models like ChatGPT to work on our own private content…
Mar 20, 2023