Neural networks and their effect on test and measurement [Q&A]


Historically, test and measurement has been simply about collecting data and exporting it for later analysis. Now, though, neural networks make it possible to carry out that analysis in real time.
We spoke to Daniel Shaddock, CEO of Liquid Instruments, to find out more about what this means for businesses.
BN: How are neural networks redefining test and measurement?
DS: For decades, test and measurement instrumentation has been fundamentally reactive. Engineers set up a test, capture data, and export it to a workstation for analysis. This approach -- rooted in the constraints of legacy hardware -- has shaped how labs and factories operate. But it’s rapidly becoming obsolete.
The next frontier in test and measurement is in-line intelligence: instruments that can not only acquire signals but interpret and act on them in real time. This is now possible thanks to a convergence of two key technologies -- neural networks and reconfigurable, software-defined hardware.
At Liquid Instruments, we’ve taken a major step in that direction with the Moku Neural Network. By enabling real-time neural inference directly on reconfigurable FPGA (Field Programmable Gate Array) hardware, we’re empowering scientists and engineers to bring machine learning into the loop -- without the complexity that usually comes with deploying AI on embedded platforms.
BN: Why do embedded neural networks matter?
DS: Neural networks are not new. They’ve transformed industries from finance to language translation. But in test and measurement, adoption has lagged -- not because of a lack of interest, but because of integration friction.
Running a neural network on a PC after collecting data introduces latency and bandwidth limitations. Worse, it introduces a manual bottleneck in processes that could be fully automated. For many labs, this means missed anomalies, slower experimental feedback, or a complete inability to implement closed-loop control.
By moving neural inference onto the instrument itself -- at the edge, directly on the FPGA -- we remove these constraints. Instruments no longer just record signals. They can classify waveforms, filter intelligently, detect patterns, and make decisions within microseconds of acquisition.
BN: How does FPGA technology lower the barrier to entry?
DS: Historically, deploying neural networks on FPGAs required deep expertise in VHDL, Verilog, or other low-level hardware design tools. This barrier effectively locked out most scientists and engineers, forcing them to rely on software-only solutions with limited real-time performance.
The Moku Neural Network eliminates that complexity. Users define and train their models using standard Python libraries like TensorFlow or PyTorch. Once trained, they can deploy those models directly to their Moku hardware -- no VHDL required. This means any lab with Python skills can now implement real-time ML-powered feedback and control systems on an FPGA.
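As a rough illustration of the training side of that workflow (not Liquid Instruments' actual deployment code), the sketch below defines and trains a small TensorFlow/Keras classifier on synthetic waveform data and saves it to a file. The two-class task, layer sizes, and file name are placeholders chosen for this example, and the final step of deploying the saved model onto Moku hardware uses the vendor's own tooling, which is omitted here.

    # Minimal training sketch with synthetic data; sizes and names are illustrative.
    import numpy as np
    import tensorflow as tf

    # Toy training set: 1,000 waveforms of 64 samples each,
    # labelled 0 for a noisy sine burst and 1 for a noisy square burst.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 64)
    sines = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal((500, 64))
    squares = np.sign(np.sin(2 * np.pi * 5 * t)) + 0.1 * rng.standard_normal((500, 64))
    x = np.concatenate([sines, squares]).astype("float32")
    y = np.concatenate([np.zeros(500), np.ones(500)]).astype("float32")

    # A compact fully connected network -- the kind of small model that is
    # practical to run in real time on embedded or FPGA-class hardware.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=10, batch_size=32, verbose=0)

    # Save the trained model so it can be handed off to the deployment step,
    # which is hardware-specific and not shown here.
    model.save("waveform_classifier.keras")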
BN: How do neural networks deliver real-world impact?
DS: This isn’t about theoretical capability -- it’s already changing how people work.
In optics labs, researchers are using neural inference for adaptive filtering and beam stabilization without relying on external controllers. In RF systems, neural networks detect and classify waveforms faster than traditional FFT-based approaches. In manufacturing environments, the same approach enables smarter end-of-line testing and adaptive quality control.
Because the models run directly on the FPGA, inference happens in hardware-level real time -- without PC round-trips, buffering delays, or driver dependencies.
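To make the waveform-classification comparison concrete, here is a toy, PC-side sketch contrasting the two detection styles: a conventional FFT threshold check next to a single forward pass through the small classifier trained in the earlier sketch. The buffer length, FFT bin, and threshold are arbitrary placeholders; on the instrument itself the inference would run in the FPGA fabric rather than in Python.

    # Toy comparison of FFT thresholding vs. a neural classifier; values are illustrative.
    import numpy as np
    import tensorflow as tf

    # Load the toy classifier trained in the previous sketch (placeholder file name).
    model = tf.keras.models.load_model("waveform_classifier.keras")

    def fft_detect(buffer, bin_index=5, threshold=10.0):
        # Traditional approach: look for energy in a known FFT bin.
        spectrum = np.abs(np.fft.rfft(buffer))
        return spectrum[bin_index] > threshold

    def nn_classify(buffer):
        # Neural approach: one forward pass yields a class probability.
        prob = model.predict(buffer.reshape(1, -1), verbose=0)[0, 0]
        return "square" if prob > 0.5 else "sine"

    # Simulated 64-sample capture of a noisy square burst.
    t = np.linspace(0, 1, 64)
    capture = np.sign(np.sin(2 * np.pi * 5 * t)) + 0.1 * np.random.standard_normal(64)

    print("FFT bin above threshold:", fft_detect(capture))
    print("NN classification:", nn_classify(capture))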
BN: What do you think will come next?
DS: As FPGAs become more powerful and flexible, we expect to see even deeper integration of AI at the edge. More complex models -- including convolutional or recurrent architectures -- will run on hardware, enabling applications like fault prediction, signal deconvolution, and even autonomous calibration.
Equally important, the workflow for deploying these models is being radically simplified. Just as graphical programming environments once democratized DSP, toolchains like Moku Cloud Compile and Python-based neural frameworks are democratizing embedded AI.
Looking further ahead, we believe smart instruments will soon become the default. In the same way modern scopes now include advanced math channels and triggering, the next generation will include trainable neural inference engines as standard -- just another tool in the kit.
The tools we use to explore the physical world are about to change dramatically. When instruments can understand the data they collect -- not just capture it -- we unlock new possibilities for speed, precision, and autonomy.
At Liquid Instruments, we’re excited to help lead that shift. Neural networks aren’t just for the cloud anymore. They’re coming to the front panel.
Image credit: Gemini