CASE STUDIES

Proof in the work

Selected engagements from the Starro Labs R&D practice. Each one takes a hard technical bet, de-risks it before hardware is built, and hands the client a validated design ready for fabrication.

Optical Engineering

Built-in reading vision for high-resolution displays

Client
Confidential — consumer hardware
Service
Precision optical modeling & thin-film engineering
Outcome
Manufacturing-ready design; no physical prototyping required
+2.00 D
Vision correction delivered
< 1.5 mm
Total layer thickness
16 in
Crisp viewing distance
The challenge

Around age 40, most people start to lose the ability to focus on things close to their eyes. This condition is called presbyopia, and it makes high-resolution screens hard to read at arm's length. Users end up squinting, holding the device farther away, or reaching for reading glasses.

A consumer hardware client came to us with a question: could a clear, thin layer be designed that gives the eye built-in reading-strength correction (about +2.00 diopters), without making the device thicker, blurrier, or harder to use? The space available was under 1.5 mm of total thickness. Building physical prototypes to test the idea would take months and cost serious capital. The client wanted proof the design would work before that commitment.

The solution & deliverables

We built a virtual prototype, which is a digital model precise enough to predict real-world performance. The design centered on a multi-layer optical stack, a thin sandwich of clear layers that bend light in a controlled way to bring close-up text into focus.

We tested it using transient ray-tracing, a simulation method that follows millions of light rays as they pass through the stack. We adjusted the shape of each surface and the materials between layers. The result is a light path that gives the eye the focus help it needs. No glare. No ghosting. No blur at the edges.
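To make the core step concrete, here is a toy sketch of the refraction math a ray tracer applies at every interface. It follows one ray through a stack of flat layers using Snell's law; the indices and thicknesses are invented for illustration and are not the client's design, and a real tracer handles curved surfaces and millions of rays.

```python
import math

# Hypothetical illustration: refract a single ray through a stack of flat
# layers using Snell's law (n1*sin(t1) = n2*sin(t2)). Layer indices and
# thicknesses below are made-up placeholder values.
def trace_through_stack(angle_deg, layers):
    """layers: list of (refractive_index, thickness_mm) tuples."""
    theta = math.radians(angle_deg)
    n_prev = 1.0  # ray starts in air
    lateral_shift_mm = 0.0
    for n, t in layers:
        # Snell's law at the interface into this layer
        theta = math.asin(n_prev * math.sin(theta) / n)
        # accumulate the sideways displacement across the layer
        lateral_shift_mm += t * math.tan(theta)
        n_prev = n
    return math.degrees(theta), lateral_shift_mm

exit_angle, shift = trace_through_stack(30.0, [(1.5, 0.6), (1.7, 0.4), (1.5, 0.5)])
```

The quantity n·sin(θ) is conserved through the whole stack, which is one of the invariants a simulation like this can be checked against.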

Our model runs fast enough that we could compare hundreds of design variations in days, not months. The output was a complete, manufacturing-ready specification ready for the client's fabrication partner.

The impact

The virtual prototype proved retina-grade clarity at a 16-inch viewing distance, inside a stack thinner than 1.5 mm. Translation: users see the screen as if they had perfect vision, without ever noticing that an optical layer is there. The touch surface keeps working. The display still looks the way it was designed to.

The client skipped the slow, expensive cycle of building and rebuilding physical prototypes and moved straight to manufacturing specs. That likely saved several months of development and a meaningful chunk of capital. The result is a validated architecture ready to scale into a product that is both durable and invisible to the user.

Thermal Engineering

Validating multi-day heat storage before a single part was built

Client
Confidential — clean energy developer
Service
High-fidelity multiphysics simulation & thermofluid system design
Outcome
Validated thermal architecture; advanced directly to fabrication
1.10 kWh
Extractable energy yield
48 hr
Bad-weather buffer
4-day
Transient 3D simulation
The challenge

A clean energy developer was designing an off-grid heat appliance for places without reliable power. The system needed to capture solar energy during the day and store enough heat to be useful even after a stretch of bad weather. Not for hours, but for days.

The technical heart of the design was a sensible heat thermal energy storage (SHTES) system. That is a method of storing energy by heating a dense material and then releasing that heat slowly. The hard question: would the storage actually hold high-grade heat overnight when wind, temperature, and sunlight kept changing? Building a full prototype to find out would cost serious capital and consume months of the team's runway. They needed answers before the build.

The solution & deliverables

Standard heat-flow models break down under shifting environmental conditions, so we designed a 4-day transient 3D simulation. Transient means the model tracks the system minute by minute over time. 3D means we modeled the actual shape of the device, not a simplified version.

We used computational fluid dynamics (CFD) to simulate how heat and fluid move inside the system. We wrote custom boundary conditions. These are rules that update as outside weather changes. They let the simulation shift cleanly between daytime collection mode and nighttime storage mode, just like the real device would.
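The idea of a transient solve with weather-driven boundary conditions can be sketched in one dimension. This is not the client's CFD model: it is a minimal explicit finite-difference solve of heat conduction through a storage block, where the boundary rule switches between a daytime "collection" mode and a nighttime "storage" mode. All material properties, temperatures, and the day/night schedule are illustrative placeholders.

```python
# Minimal 1D sketch (NOT the client's 3D CFD model): transient heat
# conduction with a boundary condition that updates as conditions change.
def simulate(hours=96, nx=20, dx=0.01, alpha=1e-6):
    dt = 0.4 * dx * dx / alpha           # explicit stability limit: dt < dx^2 / (2*alpha)
    T = [20.0] * nx                      # block starts at 20 C everywhere
    r = alpha * dt / (dx * dx)
    for step in range(int(hours * 3600 / dt)):
        hour_of_day = (step * dt / 3600.0) % 24
        # "custom boundary condition": rule updates with the time of day
        if 8 <= hour_of_day < 18:
            T[0] = 120.0                 # day: solar collector side held hot (placeholder value)
        else:
            T[0] -= 0.001 * (T[0] - 15.0)  # night: slow loss to ambient (hypothetical coefficient)
        T[-1] = T[-2]                    # insulated far side (zero-flux)
        # interior update: T_new[i] = T[i] + r * (T[i+1] - 2*T[i] + T[i-1])
        T = [T[0]] + [T[i] + r * (T[i+1] - 2*T[i] + T[i-1]) for i in range(1, nx - 1)] + [T[-1]]
    return T
```

Running `simulate(hours=96)` walks the block through four simulated days, the same span the real study covered, and returns the final temperature profile through the storage depth.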

The model was lean enough to run hundreds of design variations without bottlenecks. The deliverable was a fully validated thermofluid design, ready to fabricate.

The impact

The simulation proved the system delivers 1.10 kWh of usable energy and holds enough heat to operate through a 48-hour weather buffer. In plain terms: the device keeps working through two full days of bad weather without losing useful heat.

By proving the design virtually, the client skipped a slow, expensive cycle of physical trial and error. They went directly to hardware fabrication with high confidence in the outcome. The validated architecture is now ready to scale into reliable off-grid energy delivery in remote communities.

Manufacturing Automation

Bonding photonic chips with light, not glue

Client
Confidential — photonic hardware developer
Service
Custom manufacturing automation & optomechanical engineering
Outcome
Repeatable laser-fusion workstation; epoxy removed from optical path
< 1.0 µm
Alignment stability
< 5 sec
Fusion time per channel
98%
Reduction in cycle time
The challenge

A photonic integrated circuit, or PIC, is a chip that routes information using beams of light instead of electricity. PICs sit at the heart of next-generation telecom, sensing, and computing hardware. But every chip needs an optical fiber attached to it, and the fiber has to land in exactly the right spot. The margin for error is sub-micron, which means smaller than a millionth of a meter. A human hair is about 70 of those across.

That step is called pigtailing, and it is one of the biggest bottlenecks in scaling photonic chip production. The standard fix is a specialty epoxy. The fiber is held perfectly still while the glue cures under heat or UV light, a cycle that takes 30 to 45 minutes per device. The glue then shrinks as it sets, which pulls the fiber off target. Over time, the epoxy slowly releases gas and degrades the chip in any application that demands real reliability.

A photonic hardware client came to us looking for a cleaner answer. The strongest option on paper is to fuse the fiber straight to the chip with a CO2 laser, removing glue from the picture entirely. The catch: laser fusion needs intense, localized heat. That heat warps the tooling holding the fiber in place, and the alignment drifts before the bond sets. The idea works on paper. Building the system that delivers it is the hard part.

The solution & deliverables

We designed and built a custom precision workstation from scratch. The architecture had to balance three demands at once: a mechanical structure tight enough to hold sub-micron alignment, an optical path clean enough to focus the laser without distortion, and a thermal management system able to absorb the laser's heat pulse without flexing.

The workstation pairs a high-resolution microscope with a tightly controlled CO2 laser beam. We built a custom optomechanical platform, the structural frame that holds and aligns the optics, and engineered it to isolate the rest of the system from the thermal pulse the laser produces. Every step happened in-house: mechanical drawings, optical path design, assembly, calibration.

The deliverable was a turnkey workstation, ready to run controlled glass-to-glass bonds on the production floor.

The impact

The new workstation gave the client a repeatable, manufacturing-grade way to attach fibers to photonic chips with no glue in the optical path. A 45-minute curing cycle was replaced by a fusion step that finishes in under 5 seconds. That is a 98% reduction in per-channel pigtailing time, with sub-micron alignment held through every bond.

Removing epoxy also opened up applications the client could not reach before. Glass-to-glass bonds survive in places epoxy fails: high temperatures, vacuum, and long missions in space and defense. The client moved from lab-scale assembly to industrial-scale production, with hardware now ready to ship into the toughest environments.

Hardware-Software Integration

Building an autonomous vehicle from sensor stack to software

Client
Confidential — autonomous vehicle startup
Service
Hardware selection, compute, supply chain, and foundation model development
Outcome
Validated sensor stack and ruggedized compute platform; production-ready driving model
3 modalities
IMU, radar, camera validated
40+ suppliers
Supply chain qualified
< 6 months
Stack to field deployment
The challenge

An autonomous vehicle is less a single product than a system of systems. Getting one to work means solving a sensor problem, a compute problem, a software problem, and a supply chain problem at the same time, with each decision shaping the ones that follow. For an early-stage startup, that is a lot to cover with a small team.

This client had a clear direction but needed help executing across the full hardware-software stack. Sensor selection alone carries real consequences. Pick the wrong IMU (an inertial measurement unit, the chip that tracks how the vehicle is moving) or the wrong radar early, and you are locked into tradeoffs that are expensive to undo. The team needed someone who could move fast on component decisions without skipping the validation work, then carry that through to a working driving model and on-road testing.

The solution & deliverables

We started at the component level. Testing covered IMUs, radar units, and cameras against the vehicle's performance requirements: vibration tolerance, range, field of view, latency, and how each sensor would integrate with the others. From that process we landed on a sensor suite and a ruggedized NVIDIA compute platform that could run the perception and planning software in real-world driving conditions.

With the hardware settled, we built the driving foundation model. A foundation model is the AI brain that takes raw sensor data and turns it into driving decisions. Rather than adapting a generic architecture, we designed the training pipeline around the actual sensor configuration. Open-source model weights where they fit, custom components where they did not. The goal was a model that worked with the client's specific sensors, not one that needed patches to account for them later.

Supply chain work ran alongside the technical build: qualifying suppliers, locking in component sources, and getting the procurement infrastructure in place so the client could move from prototype quantities toward production without rebuilding that side of the business from scratch.

The impact

When the client reached their first deployments, they had a system that had been built and tested as a whole, not a set of individually developed pieces assembled at the end. Field testing confirmed the stack performed as intended.

The client went into deployment with validated hardware, a working foundation model, and a supply chain ready to support what came next.

Embedded AI & Security

Facial recognition and hardware keys for office laptops

Client
Confidential — corporate facilities & IT security
Service
Embedded AI security with liveness detection and hardware authentication
Outcome
Deployed facial recognition with liveness detection; hardware key and timeout protocol layered on top
< 2 sec
Authentication response time
99.3%
Liveness detection accuracy
~0%
Unauthorized access rate
The challenge

Physical office security is a solved problem until visitors are involved. Badge readers, reception check-ins, and locked server rooms handle most of it. But the laptops sitting on desks in open work areas are a different story. A visitor who wanders a few feet past where they should be can sit down at an unattended machine and see whatever was left open on screen. For most companies, the answer is a screen lock on a timer. That is a low bar.

This client ran a facility with regular visitor traffic moving through areas where staff laptops were in use. Standard operating system screen locks were not cutting it. The real issue was that the system had no way of knowing who was sitting there. A returning user and an unauthorized visitor looked identical to the software.

The solution & deliverables

We built a layered authentication system around a computer vision model with real-time liveness detection. Liveness detection is the part that confirms a real, live person is in front of the camera, not a photo or a video being held up. Without it, a printed picture is enough to fool most face-based systems.

The core function is straightforward. The system watches the device's camera feed and identifies who is in front of the screen. If the face matches the assigned user, the session stays active. If it sees someone else, or detects a spoofing attempt, the machine locks immediately.

On top of the vision layer, we added a physical hardware security key paired with a password. The key has to be physically present and inserted by the user. That combination (something you are, something you have, something you know) meant that even a perfect facial lookalike could not get through without the hardware token.

The system also runs on a timeout protocol tuned to risk level. If a short period passes without the correct user in frame, the device asks for the hardware key again. If more time elapses, the password is required as well. This gave the client a way to calibrate friction based on how long a device had been unattended, rather than treating a 30-second absence the same as a 10-minute one.
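The tiering logic reduces to a small policy function. The sketch below is illustrative only; the thresholds and factor names are invented, not the client's deployed configuration.

```python
# Illustrative sketch of the tiered timeout idea. Thresholds (in seconds)
# and factor names are hypothetical placeholders.
def required_auth(seconds_absent, key_after=60, password_after=600):
    """Map time-since-last-seen to the factors needed to resume the session."""
    if seconds_absent < key_after:
        return ["face"]                               # short absence: face match alone
    if seconds_absent < password_after:
        return ["face", "hardware_key"]               # medium absence: re-insert the key
    return ["face", "hardware_key", "password"]       # long absence: full three-factor

required_auth(30)    # a 30-second coffee-machine trip
required_auth(1200)  # a device left alone for 20 minutes
```

Because the thresholds are parameters rather than hard-coded values, an IT team can tune the friction curve without touching the recognition layer, which is the flexibility the deployment relied on.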

The impact

After deployment, the client had a security setup that behaved more like a physical access control system than a standard screen lock. Unauthorized users could not get into a device just by waiting for a coffee run. The timeout tiers gave IT a flexible policy they could adjust without touching the core system. And the liveness detection meant authentication could not be defeated with a photo.

The hardware key requirement added a layer that purely software-based systems cannot replicate. You cannot remotely clone a physical token. For a facility with visitor traffic and high-value data on those machines, that distinction matters.

Multi-Discipline Engineering

Science kits that plug into the toys kids already own

Client
Confidential — consumer education hardware
Service
Multi-discipline engineering for interactive science kit development
Outcome
Functional demonstration units; manufacturing-ready kit components
5+
Experiment types delivered
3 fields
Engineering disciplines deployed
1 brand
Toy integration target
The challenge

A consumer education startup was building science kits for kids. The clever part: each kit was designed to plug into toys families already had on the shelf, instead of asking them to buy a whole new system. The engineering had to satisfy two constraints at once. Simple enough for a child to use. Precise enough to connect reliably with third-party products the team had no control over.

The kit lineup covered a wide range of physics: optics, liquid gallium, microcontroller-driven motors, and thermal expansion. Each experiment needed different materials, tolerances, and safety considerations. Developing all of them in parallel, while keeping each one self-contained and coherent, was not a small lift for an early-stage team.

The startup did not have the in-house engineering depth to prototype across that many disciplines at once. Optics modeling, embedded electronics, and materials handling are each a specialty in their own right. Trying to hire for all three fast enough to hit development milestones would have taken most of a year.

The solution & deliverables

We came in across all the disciplines at once. Optical systems design for the optics experiments. Embedded systems engineering for the microcontroller and motor integrations. Mechanical and thermal engineering for the gallium and thermal expansion components.

For the optics work, we designed and built demonstration parts that show how light behaves in a hands-on, child-safe format. The liquid gallium experiments required careful materials handling: gallium is non-toxic but melts just above room temperature, which makes it an ideal teaching tool for phase change and called for purpose-built containment and handling parts. The microcontroller integrations involved programming embedded logic to drive motors in ways that made the underlying physics legible to a young user. Thermal expansion demonstrations were prototyped with materials chosen for clear, visible results at safe temperatures.

The deliverables were physical demonstration units and the component designs needed to move into kit production.

The impact

The client came out of the engagement with working demonstrations across five experiment categories and a clear path to production for each. Experiments that could have taken separate specialist hires months to prototype were developed in parallel, on a timeline that matched the startup's roadmap.

The toy-brand integration constraint held throughout. Each kit component was designed to work with existing products rather than replace them, which kept the original value proposition intact. The client is now positioned to expand the kit lineup, and we are continuing to support that buildout.

Computer Vision & AI

Computer vision weapons detection that runs at the edge

Client
Confidential — physical security technology
Service
Computer vision model development and cloud backend engineering
Outcome
Trained and deployed edge AI model; cloud backend integrated with existing security software
< 90 ms
Edge inference latency
3 classes
Weapon types detected
5+ clients
Deployed in the field
The challenge

The client builds weapons detection systems for physical security operations. Their customers are security teams that need automated threat alerts at the edge: cameras and compute running on-site, with no assumption of reliable cloud connectivity. The company had already picked their hardware stack, which was the right call, but it meant the software had to fit a fixed compute envelope. There was no room to swap in a bigger GPU if the model turned out too heavy. No fallback if inference was too slow.

The gap was on the AI and software side. A computer vision model that detects weapons reliably in real security environments — variable lighting, crowded scenes, partial occlusions — is not a generic problem. It needs careful data curation, model architecture choices tuned to the compute limits, and a training process that holds up under the conditions the system will actually face. The client also needed a cloud backend to connect their edge devices to their customers' existing security software, none of which had been built yet.

The solution & deliverables

We were brought in to own both the AI and the backend. On the model side, we started with the data problem: sourcing, curating, and annotating training data that reflected the real conditions these systems would operate in. From there we built and trained a computer vision model, then went through multiple rounds of fine-tuning to hit the accuracy and latency targets required for edge deployment on the client's existing compute platform.

The cloud backend connects the edge devices to the client's customer-facing security software. We designed the API layer (the bridge between systems) to handle device communication reliably, route detection events in real time, and integrate cleanly with the security platforms their clients already had in place. The architecture had to hold up under live security operations without requiring the client to overhaul their customers' existing setups.
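The adapter pattern behind that integration can be sketched in a few lines. Everything here is invented for illustration: the field names, the `acme_vms` platform, and the registry are placeholders standing in for whatever schemas the client's customers actually use.

```python
import json

# Hypothetical sketch of routing a detection event into a third-party
# security platform's schema. Platform name and all field names are
# invented placeholders.
ADAPTERS = {}

def adapter(platform):
    """Register a translation function for one downstream platform."""
    def register(fn):
        ADAPTERS[platform] = fn
        return fn
    return register

@adapter("acme_vms")
def to_acme(event):
    # map the internal event shape to the (made-up) third-party schema
    return {"alertType": "weapon", "class": event["weapon_class"],
            "cameraId": event["device_id"], "ts": event["timestamp"]}

def route(event, platform):
    if platform not in ADAPTERS:
        raise ValueError(f"no adapter for {platform}")
    return json.dumps(ADAPTERS[platform](event))

payload = route({"device_id": "cam-7", "weapon_class": "handgun",
                 "timestamp": 1700000000}, "acme_vms")
```

One adapter per downstream platform is what lets new edge events flow into an existing security stack without the customer changing anything on their side.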

The impact

The system is deployed in the field with the client's customers. The model runs at the edge on the compute stack the client had already committed to, meets the latency requirements for real-time detection, and feeds events to the cloud backend without gaps.

The client went from a hardware-ready but software-incomplete product to a fully deployed system inside a single engagement. Their customers have a weapons detection capability that works inside their existing security software, with no rip-and-replace required on the infrastructure side.

Build The Future With Starro Labs

Let’s talk. Whether you’re ready to start a project or just exploring options, we’re here to help.