By Brentan Alexander, CSO & COO, New Energy Risk

The latest entry in my series on resolutions for technology innovators: test your tech in the real world! Having spent a number of years in academic and early-stage R&D, I understand the desire to use highly controlled conditions and parameters when putting together testing programs. When you are working to understand fundamental aspects of your technology so you can refine and improve it, it is necessary to inject as few variables as possible into your testing program, so that you can reliably link changes in performance to the control parameters of interest.

However, when moving out of the development stage and into the commercialization of your new technology, this controlled approach to testing is insufficient. At New Energy Risk, I regularly review testing data from companies and innovators who misunderstand what types of data I, and by extension the broader debt and growth capital investment market, want to see. You are trying to sell your equipment or build a project; understanding the science underpinning your process is table stakes. Without that understanding, you aren't even making it through the door. What I (and other capital providers) want to see is a demonstration that the engineered system works as advertised.


When we first meet, I will assume that the science that underpins your technology is well understood. Showing me the controlled experiments that elucidate the interplay between operating conditions and performance is necessary so I can validate that assumption, but those experiments miss the point of what I'm really after: I want test data that demonstrates the technology can reliably operate over its full operational window for its expected life.

Using super-refined, ultra-pure feedstock for a new kind of biomass facility does not demonstrate that a technology will work on the dirtier and less consistent biomass available for a commercial plant. Validating a predictive algorithm on the very data that was used to train the system, even if only a subset of the validation dataset was used in training, provides little guidance on whether the algorithm will perform on the wide variety of datasets likely to be encountered in the field. Running controlled charge/discharge tests on a battery in a specially conditioned laboratory does not validate that the battery will perform under variable loading conditions in the desert sun.
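The algorithm-validation pitfall above is worth making concrete. Here is a minimal, hypothetical sketch (the 1-nearest-neighbour "model" and the synthetic noisy-label data are invented for illustration, not drawn from any real project): a model that memorizes its training data scores perfectly when evaluated on that same data, while a disjoint held-out set reveals the real error rate.

```python
import random

random.seed(0)

def noisy_label(x):
    # The true rule is "x > 0", but 20% of labels are flipped (field noise).
    return (x > 0) != (random.random() < 0.2)

points = [random.uniform(-1, 1) for _ in range(200)]
labels = [noisy_label(x) for x in points]

def predict(train_x, train_y, x):
    # 1-nearest-neighbour: return the label of the closest training point.
    # On a training point this finds the point itself, i.e. pure memorization.
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

def accuracy(train_x, train_y, test_x, test_y):
    hits = sum(predict(train_x, train_y, x) == y
               for x, y in zip(test_x, test_y))
    return hits / len(test_x)

# Split into disjoint training and held-out sets.
train_x, test_x = points[:150], points[150:]
train_y, test_y = labels[:150], labels[150:]

# Evaluated on its own training data, the memorizer looks flawless.
print(accuracy(train_x, train_y, train_x, train_y))  # 1.0

# Evaluated on held-out data, the label noise caps real-world accuracy.
print(accuracy(train_x, train_y, test_x, test_y))
```

The gap between the two numbers is exactly the misleading signal a capital provider is trying to avoid: only the held-out score says anything about performance on data the system has never seen.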


If you’re working to shield your device or equipment from real-world conditions that could significantly undercut performance of the technology, then you probably aren’t ready for commercialization. Take the kid gloves off, because no customer wants to be a guinea pig, and nearly all of them will see a lack of real-world testing data as a lack of readiness and seriousness.

So how do you run a demonstration test that is useful to the finance community? Stop trying to control things and let go of the handlebars. Send devices outside, give them to potential clients or partners, and let them control the asset. Is your technology sensitive to feedstock quality? Buy the low-quality stuff for an extended test. Does temperature impact efficacy? Send one unit to the desert and another to Alaska. Vary loads, feedstock parameters, or any other controlling conditions throughout the test, or let nature randomize them for you.

Anything short of this, and you’ll find yourself with customers, partners, or capital providers asking for more validation before starting a relationship.