AI Data Centres And Future Innovations: Dinis Guarda Interviews Larry Yang, Chief Product Officer At Phononic


    In the latest episode of the Dinis Guarda Podcast, Larry Yang, Chief Product Officer at Phononic, explores Phononic’s sustainable cooling technologies, the evolution of AI data centres, and the future of innovation and scaling in hardware and software industries. The podcast is powered by Businessabc.net, Citiesabc.com, Wisdomia.ai, and Sportsabc.org.

    Dinis Guarda Interviews Larry Yang, Chief Product Officer At Phononic

    Larry Yang is a technology leader with an engineering background. He is currently the Chief Product Officer at Phononic, a company that offers solid-state cooling innovations, delivering high-performance, energy-efficient solutions for data centres, cold chain, eGrocery, pharma, and HVAC.

    During the interview, Larry discusses the evolution of consumer products and technology:

    “Motor technology has evolved incredibly over a hundred years and has met the demands of consumers. Computers used to fill a room, and now you have multiple consumer computers all around you. Refrigeration is now at that moment. The old vapour compressor is at a crossroads.

    One issue is the hydrofluorocarbon refrigerant problem. It is now recognised as a toxic material that has to be eliminated from use. We will eliminate hydrofluorocarbons to save the planet, and that’s not going to be reversed.

    Sustainability is not enough. You also have to solve an actual consumer problem. Right now, there’s only one place you can keep things that need cooling, and that’s in your kitchen refrigerator.

    Wouldn’t it be great if you could modularise refrigeration and just push it out into all the different corners of the house?”

    AI data centres and fiber optic networks

    Larry discusses Phononic’s innovative cooling solutions for high-performance computing and networking, focusing on data centres and the challenges faced in cooling advanced AI processors:

    “At the heart of AI or cloud computing or even blockchain mining is a computer. You also have to connect many processors together, and so the network is also very important to bring them together.

    Most of the data that’s being sent around the world, even between or inside data centres, is sent on fiber optic cables.

    Fiber optic cables are strands of glass with lasers that sit on the ends. The lasers are sending the data.

    When a laser is sending data faster and faster, it gets hot, it gets out of spec, and it gets literally out of tune because it’s no longer sending at the right frequency.

    These little chips I was showing you earlier sit on the ends of fiber optic cables. Our chip keeps the laser cool and keeps the laser playing in tune.

    We’ve been doing this for about eight years now and have shipped over 30 million devices that cool these lasers.

    That number is growing because as data rates get higher and higher, the demand for lasers to go faster and faster is ever-increasing.

    Our solution is in all the major hyperscalers. We’re in Google, Amazon, Microsoft, Meta, etc.

    We’re at the ends of the fiber optic cables that go all the way to your telco central office or your ISP’s server office, and at the ends of the cables that go to your home or your office. We’re also in cellular towers.”

    Phononic’s cooling technology ensures these lasers stay cool and operate at optimal performance, which is crucial for high-speed data transmission.
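
    To put Larry’s “out of tune” point in rough numbers (textbook figures, not Phononic’s own data): the emission wavelength of a typical telecom DFB laser drifts by roughly 0.1 nm per °C, while dense-WDM channels at 1550 nm on a 100 GHz grid sit only about 0.8 nm apart. Converting the grid spacing from frequency to wavelength:

    % Illustrative back-of-envelope calculation using assumed textbook values
    \Delta\lambda = \frac{\lambda^{2}\,\Delta\nu}{c}
                  = \frac{(1550\ \mathrm{nm})^{2} \times 100\ \mathrm{GHz}}{3\times 10^{8}\ \mathrm{m/s}}
                  \approx 0.8\ \mathrm{nm}

    At roughly 0.1 nm/°C, a temperature swing of just a few degrees moves the laser a large fraction of a channel width, which is why a thermoelectric cooler holding the laser at a fixed temperature keeps it “playing in tune”.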

    High-performance chips innovation

    Larry shares insights about his involvement in the Hot Chips Symposium, a conference focused on high-performance chip innovations:

    “I’ve been involved with the Hot Chips Symposium now for several years. The conference itself is over 35 years old. I attended the first couple of them and then was involved off and on. We like to pride ourselves on being a conference for engineers, put on by engineers.

    It’s a 100% volunteer team that not only puts the program together, but we also run the production, we run the catering, we bring the ice cream, and we deal with parking. So, it’s very volunteer-based.

    It’s made up of these very successful computer architects who have worked at Intel, AMD, IBM, Meta, Google, all the big players. These are engineers who really believe in their work, but more importantly, believe in the importance of sharing the work between our communities so that we can build on all of our ideas and continue to innovate.”

    Talking about the evolution of chip architecture, Larry says:

    “At the beginning, it was all about processors for personal computers. That was kind of the thing at the time, and then computer workstations. Single-chip processors morphed into multi-core processors as Moore’s Law moved on and we could pack more processor cores onto a piece of silicon.

    We started bringing other parts of the system onto the silicon. I remember a talk from Nvidia, probably given by Jensen, maybe 15 years ago, about how the GPU was a game-changer as a general-purpose computer.

    We like to carve out some space for non-AI topics; otherwise, it would just become an AI show. So we’ll have embedded processors, network processors, and sometimes interesting topics on programmable processors or software papers, even a thermal management paper. It’s going to be another great conference.

    It’s probably going to be mostly AI, probably three-quarters AI, but we like to keep some balance and have non-AI innovations as well. It’s a fascinating time for chip innovation, and AI is definitely driving a large portion of the developments in the space.”

    Concluding the interview, Dinis and Larry discuss future innovation and the industry outlook:

    “It’s not enough just to have the one good idea and then solve that one problem. If you really want to make an impact, you have to figure out how to scale that solution. There are a number of different ways you can do that; for a software product, building a software platform or ecosystem would be the right way to do it.

    When you’re building a hardware product, you cannot build every single product that could use your technology. We have our little facility in Durham, North Carolina, and we just can’t do that.

    We’ve adopted a licensing model where we build the thermoelectric chips, and not just the chips but the whole subsystem around them: the heat exchangers, the control software, the cloud software. We license that to different companies. We license them the design: here are your design rights, and you can modify it however you want.

    When I was at Google, we had an expression: never bet against bandwidth. We knew that if you sped up a search result, billions in revenue go up, right? There’s a very clear correlation between speed and revenue.”