Chestnut: AI Powered Baby Monitor

Published on 8 September 2020

Chestnut branding: the word "chestnut" in white lowercase on a vibrant green background, beside the curved edge of the white monitor unit.

Startups are hard and hardware startups are even harder. That was the lesson we learned while building Chestnut, a smart baby monitor with genuinely promising technology that was gaining traction until COVID-19 hit. This story, like many others, deserves to be told – not just for closure, but for the lessons it offers other UK entrepreneurs.

The Problem Space

As any new parent knows, sleep deprivation is real, and baby monitors often contribute to the problem rather than solving it. Traditional monitors frequently wake parents unnecessarily for minor noises while potentially missing critical situations. This pain point was something Alex Taylor, Chestnut's founder, understood firsthand as a parent. The statistics were compelling: over 25% of mothers and more than 10% of fathers suffer from post-natal depression, with sleep deprivation being a significant contributing factor.

The market opportunity was substantial. With approximately 20 million baby monitors sold globally each year and the smart/video monitor segment growing at 12% annually, we were targeting a sizeable and expanding market. Industry projections pointed to the broader smart home market reaching $151 billion by 2024, with baby monitoring technology representing a significant component of this growth.

I initially joined as Fractional CTO before becoming a co-founder, and played a pivotal role in shaping how Chestnut aimed to solve this universal problem through technology. We weren't just building another baby monitor – we were creating a sleep assistant that could intelligently determine when parents needed to be alerted and when they could continue resting.

Understanding Users

One of our first priorities was to deeply understand parental needs. We conducted extensive interviews with parents, including 2-hour sessions with five families who tested our initial concepts. The feedback was tremendously positive, with comments like "When can I get one?" and "Shut up and take my money!" This validated our approach to developing a monitor that would only wake parents when their intervention was truly needed.

A survey of 505 parents highlighted common complaints about existing baby monitors: false alarms, poor battery life, unreliable signals, and low-quality images. These insights shaped our product requirements and guided our technological development.

Initial Concept

Chestnut's vision went beyond simply improving existing monitors. We sought to create a comprehensive baby monitoring platform using:

  • Smartphone-grade technology and sensors - including HD cameras with night vision and radar technology to provide superior monitoring capabilities
  • On-device intelligence - multiple machine learning algorithms running on powerful processors to analyse data in real-time without relying on cloud processing
  • Autonomous response system - designed to help soothe babies back to sleep via gentle music and projected visuals, or to alert parents when intervention is necessary

Our platform was also conceived to integrate with a broader ecosystem, connecting parents with sleep training consultants and medical experts who could access relevant monitoring data. We included a library of sensory entertainment content specifically designed for infants and toddlers.

The whole concept was reimagined around what parents really need: a decent night's sleep, less worry, and confidence their child is safe. This parent-first approach set us apart from competitors who focused primarily on the monitoring hardware rather than the holistic experience.

The Prototyping Journey

The evolution of our prototype reflected our iterative approach to both hardware and software development. 

The initial proof-of-concept used two Android smartphones, one as the camera unit and one as the parent unit communicating via secure WebRTC. This allowed us to quickly test our core software algorithms without custom hardware.

As the concept proved viable, we migrated to a Raspberry Pi-based prototype that allowed us to integrate additional sensors and test more complex functionality. We initially used Android Things, which simplified hardware integration. However, Google's diminishing support for Android Things forced us to consider embedded Linux (Yocto) before ultimately settling on AOSP.

For streaming video, we implemented a custom WebRTC stack with modified audio and video drivers to allow simultaneous access to streams for both transmission and ML processing. This required developing custom audio and camera capture pipelines to maintain a shared buffer accessible to both the ML engines and the WebRTC framework.
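The fan-out pattern behind that shared-buffer design can be sketched in a few lines. This is a simplified illustration, not Chestnut's actual pipeline code: the class name, queue sizes, and consumer names ("webrtc", "ml") are hypothetical, and the drop-oldest policy is one reasonable choice for keeping a slow ML consumer from stalling live transmission.

```python
import queue
import threading

class FrameFanOut:
    """Duplicates captured frames to multiple consumers without blocking capture.

    Each consumer (e.g. the WebRTC encoder and the ML engine) gets its own
    bounded queue; if a slow consumer falls behind, its oldest frame is
    dropped rather than stalling the capture pipeline.
    """

    def __init__(self, consumer_names, maxsize=8):
        self.queues = {name: queue.Queue(maxsize=maxsize) for name in consumer_names}
        self._lock = threading.Lock()

    def publish(self, frame):
        with self._lock:
            for q in self.queues.values():
                if q.full():
                    q.get_nowait()  # drop the oldest frame for the lagging consumer
                q.put_nowait(frame)

    def subscribe(self, name):
        return self.queues[name]

# Capture side: one producer feeds both consumers.
fanout = FrameFanOut(["webrtc", "ml"], maxsize=4)
for i in range(10):
    fanout.publish({"seq": i, "data": b"\x00" * 16})

# Consumer side: each pipeline drains its own queue independently,
# so only the 4 most recent frames (seq 6..9) remain.
ml_frames = []
q = fanout.subscribe("ml")
while not q.empty():
    ml_frames.append(q.get_nowait())
```

The key property is that publishing never blocks: the capture thread stays real-time while each downstream pipeline consumes at its own pace.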

It became clear that for cost efficiency and to keep our bill of materials (BoM) manageable, a more focused sensor approach was necessary. While we had initially planned to incorporate a thermal camera and Time-of-Flight sensors, these were ultimately dropped to control costs. Instead, we prioritised a directional array microphone that provided better value for our core functionality. 


Raspberry Pi-based prototype running Android, with its sensor panel labelled: thermal, HD night vision, time of flight (movement), environmental (CO2, CO, VOC), and Doppler radar.

Custom AI Algorithms

As Fractional CTO, I led the development of specialised machine learning models and intelligent detection systems:

  • Crying Detection: We implemented CNN-based audio classification using TensorFlow Lite to differentiate crying types. Initial experiments used Python with Librosa for MFCC feature extraction, converting audio spectrograms into image classification problems. While our proof-of-concept achieved acceptable accuracy, we struggled with the limited dataset (340 samples). We transitioned to an adapted TensorFlow model that could perform inference in real-time on an embedded device.
  • Baby Presence Detection: Using transfer learning on a MobileNet SSD architecture, we fine-tuned the model on a custom dataset of 2,000 baby images with bounding box annotations. The model was retrained to detect both humans (leveraging the COCO dataset's 66,808 person images) and babies specifically. False positives were initially problematic, particularly with beds and soft furnishings, requiring dataset augmentation with negative examples of teddy bears, chairs, couches, and beds.
  • Vital Detection: We experimented with high-band radar (24GHz Infineon Sense2Go) to detect micro-movements associated with breathing and heart rate. The algorithm applied FFT and peak detection over a sliding window of radar samples, identifying periodic motion within specific frequency ranges. While promising, this work required further refinement for reliable vital sign detection.
  • Sensor Fusion: A critical component of our approach was fusing multiple unreliable signals (audio, visual, motion, thermal) to produce more reliable event detection, built around a confidence-based propagation model.
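To make the vital detection approach concrete, here is a minimal sketch of frequency-domain breathing-rate estimation over a window of radar samples. It is illustrative only: the sample rate, window length, and breathing band are assumptions, it uses a naive DFT restricted to the band of interest rather than a full FFT, and real radar data would need far more conditioning than this synthetic signal.

```python
import math

def breathing_rate(samples, fs, band=(0.1, 0.8)):
    """Estimate breathing rate (breaths/min) from one window of radar samples.

    Computes a discrete Fourier transform restricted to the breathing band
    and picks the frequency bin with the largest magnitude (peak detection).
    """
    n = len(samples)
    mean = sum(samples) / n
    centred = [s - mean for s in samples]  # remove the DC offset

    # Band limits expressed as DFT bin indices.
    lo = max(1, math.ceil(band[0] * n / fs))
    hi = min(n // 2, math.floor(band[1] * n / fs))

    best_k, best_mag = lo, 0.0
    for k in range(lo, hi + 1):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(centred))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(centred))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n * 60  # peak frequency converted to breaths per minute

# Synthetic 10-second window at 20 Hz: 0.4 Hz chest motion (24 breaths/min)
# plus a small out-of-band interference component.
fs = 20
samples = [math.sin(2 * math.pi * 0.4 * i / fs)
           + 0.1 * math.sin(2 * math.pi * 3.0 * i / fs)
           for i in range(10 * fs)]
rate = breathing_rate(samples, fs)  # ~24 breaths/min
```

Restricting the search to a physiologically plausible band is what makes the peak detection robust: large out-of-band motion (such as a parent walking past) simply never becomes a candidate peak.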
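The sensor fusion idea can also be illustrated with a small sketch. This is one plausible reading of a confidence-based propagation model, not Chestnut's production logic: the sensor names, reliability weights, and log-odds combination rule are all assumptions chosen to show how several unreliable signals can yield one more reliable event confidence.

```python
import math

def fuse_confidences(detections, reliabilities):
    """Fuse independent per-sensor confidences into one event probability.

    Treats each sensor's confidence as independent evidence and combines it
    in log-odds space, discounting each sensor by a reliability weight in
    [0, 1] so that noisier sensors pull the fused estimate less.
    """
    log_odds = 0.0
    for sensor, p in detections.items():
        p = min(max(p, 1e-6), 1 - 1e-6)  # clamp to avoid infinite log-odds
        log_odds += reliabilities[sensor] * math.log(p / (1 - p))
    return 1 / (1 + math.exp(-log_odds))  # back to a probability

# Illustrative reliability weights per sensor (hypothetical values).
reliability = {"audio": 0.9, "video": 0.8, "radar": 0.5}

# Audio and video agree the baby is crying; radar is neutral.
agree = fuse_confidences({"audio": 0.9, "video": 0.85, "radar": 0.5}, reliability)

# Only audio fires (e.g. background noise): the fused confidence stays lower,
# so the parents are not woken by a single noisy signal.
audio_only = fuse_confidences({"audio": 0.9, "video": 0.1, "radar": 0.5}, reliability)
```

This captures the product goal directly: a single sensor firing in isolation should rarely clear the alert threshold, while agreement across sensors should.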

Progress, Patents & Partnerships

The project gained significant momentum during 2019. We assembled a strong team including design specialists, software engineers, data scientists and operational experts. Industrial design progressed from digital renderings to physical models that allowed us to evaluate ergonomics and usability.

Intellectual property protection was a priority from the outset. We secured patent applications, registered design rights across multiple territories and trademarked the Chestnut brand. We were particularly careful to extend design protection to Asian markets, knowing we'd likely manufacture there and recognising the potential risk of copycat products. With support from the Innovate2Succeed programme, we commissioned a comprehensive IP audit that provided valuable guidance on maximising our protection and developing a robust long-term strategy.


We also partnered with Ubenwa, a company specialising in analysing baby cries to detect serious medical conditions such as asphyxia and brain injuries through acoustic biomarkers. Their technology complemented our approach, as they could identify early signs of potentially fatal anomalies by processing the crying data we collected.



Final design renders of the Chestnut camera unit (left) and the projector unit in operation (right), showcasing both monitoring and projection features.


The COVID Curveball

By early 2020, we had reached a critical juncture. Our bill of materials (BoM) was ready, and we were preparing for a small production run in Taiwan to validate the hardware before proceeding to clinical trials and a crowdfunding campaign. We also had an Innovate UK grant application in the works. 

Then COVID-19 changed everything. Like many hardware startups in early 2020, we faced unprecedented challenges with supply chains, manufacturing, and investment uncertainty. Despite having secured seed funding through angel investors and the SetSquared network, the project was ultimately mothballed.

Looking Back

Despite not reaching the market, Chestnut represented a genuine attempt to apply cutting-edge technology to a real problem affecting millions of families.

Failing is hard on founders and everyone involved. The emotional investment in a startup often exceeds the financial one, and shutting down a project you've poured your heart into is genuinely painful. 

In the UK tech ecosystem, we don't discuss failure as openly as our American counterparts, who often wear previous ventures – successful or not – as badges of honour and learning. There's wisdom in that approach. Each setback contains invaluable lessons that strengthen founders for future endeavours.

It takes a certain type of resilience to dust yourself off and go again. But that's precisely what drives innovation forward – not the absence of failure, but the courage to move beyond it. By sharing these stories openly, perhaps we can help build a more resilient UK startup culture where lessons from projects like Chestnut contribute to future successes.

If you're interested in reviving this project or learning more about the technology we developed, please drop me an email. Deck here.

UPDATE: Alex has since pivoted to work on innovative wine packaging – wishing him all the best.


Building something great? Let's talk tech strategy

Email hello@mikesmales.com or use the contact page