
Trading Tomorrow - Navigating Trends in Capital Markets
Welcome to the fascinating world of 'Trading Tomorrow - Navigating Trends in Capital Markets,' where finance, cutting-edge technology, and foresight intersect. In each episode, we embark on a journey to unravel the latest trends propelling the finance industry into the future. Join us as we dissect how technological advancements and market trends unite, shaping the strategies that businesses, investors, and financial experts rely on.
From the inner workings of AI and ML to the transformative power of blockchain technology, our host, James Jockle of Numerix, will guide you through captivating conversations with visionaries who are not only observing the future but actively shaping it.
The Power Behind AI: Off-Grid, Zero-Water Data Centers Are Here
As AI demands skyrocket, the true bottleneck may not be models or chips—but power. In this episode, Yuval Bachar, the founder and CEO of EdgeCloudLink, unveils the infrastructure crisis behind AI. But could hydrogen-powered, off-grid, zero-water data centers be the solution? Find out during this episode of Trading Tomorrow – Navigating Trends in Capital Markets with Jim Jockle.
Jim:Welcome to Trading Tomorrow - Navigating Trends in Capital Markets, the podcast where we deep dive into technologies reshaping the world of capital markets. I'm your host, Jim Jockle, a veteran of the finance industry with a passion for the complexities of financial technologies and market trends. In each episode, we'll explore the cutting-edge trends, tools and strategies driving today's financial landscapes and paving the way for the future. With the finance industry at a pivotal point, influenced by groundbreaking innovations, it's more crucial than ever to understand how these technological advancements interact with market dynamics. Today, we're diving into a topic that is at the very foundation of AI's rapid growth: data centers.
Jim:As AI workloads intensify, so does the demand for energy and advanced computing infrastructure. How can the industry scale without overwhelming power grids, and how do sustainability and AI innovation intersect? To answer these questions, we're joined by Yuval Bachar, the founder and CEO of EdgeCloudLink, or ECL, a pioneering company revolutionizing AI data centers with sustainable off-grid solutions. Yuval's career spans leadership roles at Microsoft, LinkedIn, Facebook, Cisco and Juniper Networks. With a focus on sustainability, he's leading the charge in building carbon-free AI data centers. Yuval is also a recognized leader in digital infrastructure innovation, with eight US patents and a deep understanding of data center efficiency, compute density and next-generation cooling solutions. Today, he's here to share his insights on how AI infrastructure is evolving to meet the technology's insatiable demands without burning through resources.
Guest:Thank you for having me. It's a pleasure and an honor to be here.
Jim:Well, you know, perhaps we could start by just explaining what makes AI data centers so different from traditional data centers in terms of energy consumption, infrastructure, I'm sure, water cooling, the whole thing. I'd love to get your perspective.
Guest:Yeah, I think when we refer to AI data centers, we're initially referring to data centers targeting the LLMs, the training part of it. There's a completely different discussion we should probably have about inference data centers, but let's start with the training ones. On the training data centers, the phenomenon we're seeing is a tremendous multiplier on the density of the racks and the need to put a very large quantity of hardware into a very small space. That is challenging most of the data centers out there in the world today, because data centers up until two years ago in the US were averaging eight kilowatts per rack, and we've reached a point today where we're at 150. And if you listened to Jensen on Monday, we're targeting 600 in the next two years, 600 kilowatts. That's almost 100x the average that was there just three years ago. That is the major challenge for any data center. The secondary challenge, beyond the density, is the ability to operate with liquid in the data center. Liquid cooling was initially used only to assist air cooling, but now we have a significant level of liquid cooling which requires direct-on-chip liquid cooling. Direct-on-chip liquid cooling forces us to bring a very high capacity of water, at very high pressure and flow, into the data center to feed elements that do a very high level of cooling on the chips themselves, not the room but the chip itself. That requires water infrastructure, which again does not exist in most traditional data centers, and requires a specialty data center to be built like that.
Guest:The third thing, which is actually probably one of the biggest problems we have today, is not necessarily related to the data center itself, it's the surrounding infrastructure. Since these are very large sites that require very large quantities of power, we've just run out of power, so we don't have power coming from the grid to actually feed those systems and data centers. The time it takes to increase the capacity of the grid and the availability of power for those data centers is between three and five years, or 10 years in some cases, and the problem we have today is that data center requirements are growing very quickly and we cannot get enough power from the grid. The secondary challenge is data center build-out time, which traditionally was always between three and four years, and that is not matching the technology build-out time. The technologies are changing every nine to 12 months.
Guest:Nvidia is delivering a new platform every nine to 12 months, and our cycle to build used to be three to four years, which means that's about four to five cycles of technology. That means that if you start a project today, you cannot deliver something that will address the generation five cycles from now, because nobody knows what five generations from now is going to look like. So the cycle of data center build-out is actually an impeding aspect of what we do. What a lot of people do today is basically over-design. They say, okay, I have no idea what I'm going to need, so I'm going to design the maximum I can today with today's technology, and hope that in three or four years, when I deliver the data center, it will address the needs.
Guest:That is a huge risk to the people who build data centers right now, and it doesn't look like NVIDIA is slowing down. On the contrary, it looks like they're accelerating, which means we have to go through a very, very fast cycle right now, and all of this is a challenge for AI data centers. So, if I summarize: very high density; liquid to the data center racks, which are pushing 150 today and will be pushing hundreds of kilowatts per rack in the future; power delivery to those racks; power delivery to the data centers themselves; and being able to sustain all of this at high availability. All of this together creates a major challenge in the data centers and makes them special.
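The density figures quoted above can be sanity-checked with a quick sketch. The 8 kW, 150 kW and 600 kW numbers come from the conversation; the 200-rack hall is a purely hypothetical example, not an ECL or industry specification.

```python
# Rack-density jump described in the episode: ~8 kW/rack (US average
# ~2 years ago) -> 150 kW today -> 600 kW projected in ~2 years.
RACK_DENSITIES_KW = {
    "old US average": 8,
    "today": 150,
    "projected (~2 years)": 600,
}

baseline = RACK_DENSITIES_KW["old US average"]
for era, kw in RACK_DENSITIES_KW.items():
    print(f"{era}: {kw} kW/rack ({kw / baseline:.2f}x the old average)")

# Total IT load for a hypothetical 200-rack hall at each density
# (illustrative only): the same room goes from ~1.6 MW to 120 MW.
racks = 200
for era, kw in RACK_DENSITIES_KW.items():
    print(f"{racks} racks at {kw} kW = {racks * kw / 1000:.1f} MW")
```

Note that 600 / 8 is a 75x jump, consistent with the guest's "almost 100x" characterization.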
Jim:Well, in terms of just the resources alone, we've seen Google making investments into small nuclear reactors. But just even the availability of water has to come into play in terms of where these data centers can be located.
Guest:A hundred percent. While the complexity is bringing the water into the data center itself, there is also the complexity of how many gallons of water you can use in the community you're in. Those communities in some cases do not comprehend how big the consumption of water can be. It can be hundreds of millions of gallons per month of water consumed, taken from the environment around you. So unless you're in an environment that's rich in water, or where there's excess water, you're actually taking the water from the communities around you if you operate in a standard environment.
Jim:So let me ask a question here, just because I don't know the answer and I've wondered this: is it freshwater, or is saltwater something that can be utilized?
Guest:You have to use freshwater. Saltwater is way too corrosive. So if you use saltwater, you have to actually run desalination on it to deliver clean water with no salts in it. The salt is very corrosive to the systems and will not allow the system to operate.
Jim:Given those kinds of constraints of power and water and the impact on communities, a lot of state regulators, at least here in the US or countries, are probably imposing a lot of regulation as it relates to where these data centers can actually be built A hundred percent.
Guest:We got to the point that we're running out of power in a lot of places and we just can't place anything over there. The communities are not willing to accept the data centers from multiple levels. One is the power consumption, the water consumption, the air pollution that they're creating, the noise pollution they're creating and the way they look. Data centers are not very appealing buildings. If you saw them, they look like one big brick of concrete with no windows. Usually the data center is not going through some kind of architectural beautification cycle, so all of these are creating challenges in the communities.
Guest:The communities are now trying to impose exactly what you're describing: restrictions on how many data centers can come in, what kind of data center can come in, and what the implications are for a data center to come in. A lot of places now require you to bring your own power into the grid if you want to connect your data center to the grid, and that's very complex, because data center companies are just not power generators. They have not done power generation, and they're looking for ways around it. Like you mentioned, Google and others are looking for other ways to bring power into the grid to be able to grow the data center footprint.
Jim:So now your company, ECL, is at the forefront of sustainable data center technology. What inspired you to create off-grid, hydrogen-powered AI data centers?
Guest:Yeah. Because of my history, I used to work for hyperscalers for a long time and built data centers both in a hyperscaler environment and in a co-location environment, and I got to the point where I saw that the drive in those large data center operators to create what's called carbon-neutral data centers was not necessarily to design the data center better, but to buy credits and offset the data center's impact on the environment. For me, that was always the wrong way to address the problem. The right way, in my view, was to actually build a true zero-emission data center that does not take water, and that led me to say, okay, to do that and create a disruptive data center in the world, we have to build a different kind of data center. That's how ECL was born. ECL was born from the perspective of: let's prove, and build, a business model that shows we can build a sustainable data center now, in 2024, 2025, and not in 2035 or 2040 like others were claiming before, and show that it actually has a viable business model.
Guest:Now, this happened pre-AI. When AI came into play, we actually turned out to be a perfect match for the data center requirements of AI, because we had water in the data center, and we're running off-grid, and we're zero emission and zero water, and all those things fell like a perfect match into what AI is requiring us to do. But the motivation was to create a sustainable data center with zero emission and zero water from the first moment, and make it very high end, and that enables people to operate at the level the hyperscalers operate, even if you're not a hyperscaler.
Jim:So what is MV1 in Mountain View and why is it being called a groundbreaking facility?
Guest:So MV1 was the first implementation of the sustainable data center. The ECL data center is a modular data center. What does modular mean? We build fixed blocks of 1 to 1.5 megawatts. It's a structure, not a container or anything like that, which runs completely off-grid. So it's running on self-generated energy, and at the same time it's running a very high-end cooling system and delivery to the data center for high density. MV1 was the place where we actually developed all the hardware and delivered this platform, which is one of the fastest-growing platforms in the industry today, the high-end data center. It's called AI factories in some places, it's called other names, and it enables very high density. Just as an example, we deployed the first two 150 kW liquid-cooled racks eight weeks ago, and they're running in production right now with our customer. Very few data centers in the world can actually do that. That's a very complex problem.
Guest:What we did in MV1, the first step, is we said we're not going to attach ourselves to the grid here. So we went and built self-generation of energy, with liquid hydrogen delivered to the site and hydrogen-based power generation units running on fuel cells. That was a breakthrough in power generation. It was a breakthrough in creating stationary power based on hydrogen, and it was a breakthrough in actually changing the hydrogen business, because the hydrogen business was never in the business of doing stationary power. It was always for petrochemical and other industries; it was never serving power systems. We're the first ones who actually went and built a power system based on hydrogen and ran it for the last year. Two weeks ago was the one-year anniversary of the site running on hydrogen without going down. And to do that, we did a lot of things in changing the architecture of the data center: removing the diesel generators from our footprint, removing the UPS systems from our footprint, creating a new power architecture which is more adequate for the future of AI requirements, and delivering that in breakthrough time.
Guest:We built the site in less than a year, and that's the first one. When you build the first one and do engineering development on it while you're building it, one year to deliver is without precedent. Now, it is the first small block, right, the 1.5 MW block. But the huge advantage is that when we scale to large sites, and we announced a one-gigawatt site in Texas in September, we just repeat the block many, many times. We're not doing any scale-up, we only scale out, which dramatically reduces the risk in the technology, because the technology has been proven on one block and it's just repeated as many times as we need, so the technology is not in the critical path.
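The scale-out arithmetic here is simple enough to sketch. The block sizes are taken from the episode (MV1 is described as a 1 to 1.5 MW block, and later as 1.25 MW, with the Texas site being "1,000 of those"); the helper below is an illustrative sketch, not an ECL planning tool.

```python
import math

def blocks_needed(target_mw: float, block_mw: float) -> int:
    """Identical blocks required to reach a target capacity (scale out, not up)."""
    return math.ceil(target_mw / block_mw)

# A 1 GW site built from repeated small blocks:
print(blocks_needed(1000, 1.0))   # -> 1000 blocks of 1.0 MW
print(blocks_needed(1000, 1.25))  # -> 800 blocks of 1.25 MW
```

The guest's round number of "1,000 of those" for one gigawatt lines up with 1 MW blocks; at the 1.25 MW figure mentioned later, it would be closer to 800.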
Guest:The second thing is, because of that proof of concept, we can accelerate the build-out, and we can build data centers in a matter of nine months from the moment we get a PO to the moment we give the key to the customer, at large scale, hundreds of megawatts. Part of the reason we can do it is the modularity that has been proven to work in Mountain View. The MV1 site was supposed to be a development site for us, but because it was such a high-end data center and was performing very, very well, we have paying customers. It's moved to production. We didn't plan to move it to production, but it's in production now, and we have paying customers on site running their own business on the equipment in the data center, and that by itself is a huge achievement for us.
Jim:That's incredible. You make me think of, like, the classic Tesla-type case study, right? The emerging technology, but then we need a whole network of gas stations or electric charging stations, which led to solar. What kind of technologies, as part of this, are emerging, by yourself or with your partners, that are benefiting other industries in terms of taking advantage of everything that you're doing?
Guest:Yeah. So the first step: we are making a significant change in the development of fuel cells for stationary solutions. Fuel cells have been in the market for a long time, but they never worked really well for stationary use. So we're actually changing the way fuel cells are being designed right now with our large partners, some of whom are on the automotive side and some of whom come from the traditional fuel cell business. The second thing: we're changing the hydrogen business models and the operations of the hydrogen suppliers. Today we are building around the pipeline, a pipeline which exists in the Houston area in Texas and exists to support refineries and petrochemicals, and suddenly we come and bring a green, clean industry onto the pipeline. We are changing the number of pipelines being built, the quantity of hydrogen being produced, and the type of hydrogen being produced. Hydrogen right now is mostly gray hydrogen, which is relatively high-carbon hydrogen. We are driving blue hydrogen, which is very low carbon, and definitely driving more green hydrogen production. All of these businesses require an off-taker who is going to use the hydrogen they're producing. We are a very large off-taker, and we offer that not only in the Houston area; we'll offer it in other places. There's another area where we have an impact.
Guest:For a long time, there has been huge investment in sustainable generation of energy, whether solar or wind. The challenge with that energy is that it's unreliable. One day it's there, tomorrow it's not. It's only active six or seven hours a day. We have to figure out how to turn those into a reliable solution. So ECL developed a platform which takes this unreliable power and converts it into a reliable 24/7/365 solution, combining hydrogen, high-capacity storage, which is already built into our architecture, and self-generation of power. This is a game-changer. This is phase two for us: we will build data centers based on that, and it will enable us to put a data center anywhere. Anywhere there's a solar plant, anywhere there's a wind plant, we can put a combination of hydrogen generation, hydrogen usage, direct feed behind the meter, and a large-capacity storage solution for data centers at scale. And that is something that, again, was very difficult for data centers to do, because of the availability of solar or wind power at the high level of reliability we need for data centers. We solved that problem, and we will continue to build next to those areas. So that's an impact across multiple industries, an impact across multiple developments.
Guest:And the last thing is the community integration we talked about. ECL comes into a development, into a community, investing multiple billions of dollars into the data center and requesting nothing from the community, because we don't take the power from the grid and we don't take the water from the environment. When we generate energy with hydrogen, the byproduct is water, and we use that water to cool the data center and for every other water use we have on site. So we don't take water from the community around us; we just bring them development. And that's again a game-changer in the integration with communities, because you don't come to the community anymore and say, I'm going to take your resources, just accept that because I'm going to give you economic development.
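The "byproduct is water" point can be put in rough numbers. The stoichiometry (2 H2 + O2 -> 2 H2O) is basic chemistry; the 50% electrical efficiency and the 1.25 MW block load below are illustrative assumptions, not ECL specifications.

```python
# Why a hydrogen fuel-cell site produces water rather than consuming it:
# every kilogram of H2 oxidized yields ~9 kg of H2O.
M_H2, M_H2O = 2.016, 18.015        # molar masses, g/mol
LHV_H2_KWH_PER_KG = 33.3           # lower heating value of hydrogen
EFFICIENCY = 0.50                  # assumed fuel-cell electrical efficiency

water_per_kg_h2 = M_H2O / M_H2                          # ~8.9 kg water / kg H2
kwh_per_kg_h2 = LHV_H2_KWH_PER_KG * EFFICIENCY          # electric kWh / kg H2
water_per_mwh = water_per_kg_h2 / kwh_per_kg_h2 * 1000  # kg water / MWh

block_mwh_per_day = 1.25 * 24      # one hypothetical 1.25 MW block, 24 h
print(f"~{water_per_mwh:.0f} kg of water per MWh generated")
print(f"~{water_per_mwh * block_mwh_per_day / 1000:.1f} t/day from one block")
```

At these assumptions, one continuously loaded block yields on the order of tens of tonnes of water per day, which is the surplus the guest says can cool the data center and even be returned to the community.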
Guest:We said, no, we will give you the economic development and we will not take any resources from you. And in some cases we even give you back water, because we're producing, in some cases, more water than we actually need, and that is again a game-changer in how you interact with the community. We also developed a community program, an education program that supports the communities we go into, to generate jobs that are data center related. On top of energy, the data center world right now has one big shortage, and that's people: humans who will actually work in, operate, develop and build the data centers. We are creating programs to build this workforce locally, so we don't need to bring people in from the outside. We want to create a local contribution to the economy and to the people who live in the community we come into.
Jim:You talked about MV1, and perhaps you can talk a little bit about TerraSight, the TX1 project. How are they different?
Guest:Yeah. So the TerraSight build-out, first of all, is in Texas, not in California, and the reason it's in Texas is that we have pipeline access, actually multiple pipeline access points, into the site we operate in. The site in Mountain View is a 1.25 megawatt block. The site in Texas is 1,000 of those. So it's a very, very large scale, and we're capable of putting in 1,000. We build them in phases, of course; we're not going to build them in one shot. That will be one of the biggest data centers built in the US in the coming three or four years. And the reason we can scale that way is the modularity, as well as the availability of hydrogen in our environment. We have hydrogen in Texas, and we are offsetting the hydrogen use that went into refineries to go into clean energy generation, which creates a huge partnership for us.
Guest:The second thing there is that with the site in Texas we can address very large customers. From our perspective, we can operate with very large customers that have a minimum requirement of 100 to 150 megawatts per phase, which we couldn't do in Mountain View, because Mountain View is very small in size and limited in resources. That's another thing which is very different. But other than that, the blocks look exactly the same as MV1. The blocks are just repeated.
Guest:The one interesting thing we do with our customers is, since our phases are nine- to 12-month phases, we let them change the block on the fly. They can change the block every nine to 12 months, and that's a huge difference from what other places and other companies can actually offer. So every nine to 12 months our customers define a new spec, because there's new technology that comes in. That's a very unique way to build a data center, because you build it in an incremental way instead of building one large campus on the first day, and that's something we can offer in Texas as well. We do have customers in Texas, relatively large customers. We can't name them yet, but hopefully we'll be able to name them very soon in a press release.
Jim:We'll be waiting for that, definitely. So obviously we're now at the point where quantum computing and AI are definitely pushing boundaries. If I'd sat here a year ago today, the world would have been completely different. But what do you see as the role of quantum computing in terms of potentially reducing energy demands in future data centers?
Guest:We could talk for a whole hour just on quantum computing, but quantum computing by itself as a technology is definitely a game-changer. The big question mark right now, and we see the first implementations of quantum computing, is: can they actually scale into large-scale operations in production? Because everything we've seen until now is experimental, and we're talking about three to five to seven years before we can actually see them in full production.
Guest:I think we can't really predict what will happen. It definitely will give us a much higher level of compute capability and let us do all kinds of things we couldn't dream about doing before. The question is how quickly you can move it to production, and how quickly you can move it to production in traditional environments so that it will scale. If you need a very special environment that requires tens of billions of dollars of investment just to make it work for a single unit, then that will not scale properly. We need to make sure the technology scales properly to be able to leverage what it does. But in my opinion, quantum computing is going to be the next wave after the GPU-style workloads we're creating right now. I'm sure NVIDIA is looking at it, along with others who are working on it, even though NVIDIA is focusing 100% right now on high-density GPUs to deliver the AI platforms.
Jim:So there's an ongoing debate about centralized versus decentralized AI computing. Do you think AI workloads will remain heavily tied to large data centers, or do you think we'll see more edge computing and distributed AI?
Guest:So I think for the learning stage of the models, you will need centralized, large environments, and that's where the reference to AI factories comes in; this is where you're going to create the models. For inference, which is where you use them, when you type a question into Grok, for example, that's inference, because Grok does not do learning from the questions, it's actually just applying the model. That will be pushed to the edge, in my opinion, and will enable us to build smaller sites, like 10 to 15 megawatt sites, which are closer to the endpoint, to give faster responses to people. Not every application is sensitive to latency, but most people actually have a high level of sensitivity to getting an answer quickly to the question they ask. They usually are not willing to wait.
Guest:So I think in the next five years we will see higher and higher deployment of inference sites, which are smaller sites, all over the place, and again the limiting factor will be, like we see right now, the availability of power, because in the major cities we see zero availability of power. We don't even see a small level of availability, we see zero availability, and any implementation will try to go into those areas. There's got to be a solution: how do you actually place 10 or 15 sites like this in the middle of Manhattan or Chicago or any other large metropolitan area? Where do you get the power from?
Jim:Unfortunately, Yuval, we've made it to the final question of this podcast. We call it the trend drop. It's like a desert island question. If you could only track or watch one trend in the development of AI and data centers, what would it be?
Guest:The one trend that I would track, 100%, is how we get power to enable the initial deployment of AI factories.
Guest:How do we get the power for that and how do we actually deploy it on time at the level of performance that we expect from the chip manufacturers?
Guest:And that is something that is going to make or break the whole AI phenomenon. If we can't build the infrastructure for the AI applications and keep up with the fast, I would say lightning-fast, pace at which the chip technology is growing exponentially, it will halt the effort completely. The second thing, associated with that, is how the companies who actually leverage that AI build a business from it and make money. Because, bottom line, a lot of companies will not invest a lot of money if they can't see a business model that actually works and is profitable. And right now a lot of people are speculating with investments in this market, buying a lot of equipment, building a lot of infrastructure, but only a handful of the companies are actually profitable. So we need to make sure there is a path, both on the business side for profitability and on the infrastructure side, to actually support that level of profitability and that level of growth in this market.
Jim:Yuval, thank you so much for your time and your insights. As we all look at AI, this is not an area we tend to think about, so really, really illuminating.
Guest:Thank you, it was my pleasure. Thank you very much for having me.
Jim:Thanks so much for listening to today's episode, and if you're enjoying Trading Tomorrow - Navigating Trends in Capital Markets, be sure to like, subscribe and share, and we'll see you next time.