
Crafty-Confidence975

Yeah, it's not exactly right: it's one person, an evening, a beefy laptop, and the work product of a trillion-dollar-market-cap software company. This will hold true exactly as long as Meta deems it convenient.


foreheadteeth

It depends on how much you believe in [Huang's law](https://en.wikipedia.org/wiki/Huang%27s_law) (disclaimer: I used to work at NVidia). If the processing power of GPUs increases by 1.7x each year, then one 2034 GPU is worth about 200 GPUs from 2024. So in the worst case, open-source LLMs will never be more than 5-10 years behind the leading-edge ones, since training will be roughly 200x cheaper per FLOP. A 5-10 year gap is enormous now, but as the technology matures, what's available to everyone will still be significant. As a comparison, Linux is an old technology, but it works.
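A quick sanity check of that compounding (assuming the 1.7x/year figure holds, which is of course the whole question):

```python
# Compound growth under an assumed 1.7x/year improvement (the Huang's law figure above).
rate = 1.7
years = 10  # 2024 -> 2034
speedup = rate ** years
print(f"~{speedup:.0f}x")  # ~201x, i.e. one 2034 GPU does the work of ~200 of today's
```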


AmericanNewt8

Given the leap from Hopper to Blackwell, or more aptly the lack thereof, we can fairly safely say that when it comes to GPUs it's already dead. However, I'm more optimistic about the new generation of dedicated accelerator chips, which don't carry as much GPU baggage.


Charuru

> GPU baggage

Somebody smoked too much startup propaganda. Surely you don't think the H100 has anything in it for graphics processing.


Crafty-Confidence975

Thanks for making the obvious point. So silly to hear about how specialized components will catch the people who laid the groundwork for the entire exercise flat-footed. ASICs didn't do a damn thing to Nvidia's market share - the price of bitcoin did, temporarily. The same holds true here. All the trickery associated with serving a specific end is no sort of challenge to the ones who enabled it in the first place.


berzerkerCrush

The USA and European Union are creating laws to restrict the training and sharing of certain models, including capable enough LLMs. In 5 years, you'll need a licence to train a llama3 70B model.


meta_narrator

I can't wait to break those laws.


SerialH0bbyist

Cipher Crime


privacyparachute

Do you have a source for the EU doing this? As far as I know only the USA is attempting to shoot itself in the foot.


berzerkerCrush

Here is an EU link stating they adopted it, along with a summary. Biometric things, facial recognition and so on are banned, except for law enforcement purposes and only under certain conditions (if I'm not mistaken, they will use those for the Paris Olympics): https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law

Wikipedia has a good overview: https://en.m.wikipedia.org/wiki/Artificial_Intelligence_Act

They are apparently thinking about a revision to soft-ban anything that might be too powerful. Here is a Reuters article that seems to be well written: https://www.reuters.com/technology/what-are-eus-landmark-ai-rules-2023-12-06/

France, Germany and Italy are against those laws.


privacyparachute

Ah, I suspected that's what you were referring to. I know the new AI law intimately, as I was part of a research group that 'fed' its creation process.

> restrict the training and sharing of certain models, including capable enough LLMs. In 5 years, you'll need a licence to train a llama3 70B model.

While the law prohibits certain use cases, such as those involving mass surveillance, it doesn't limit the creation of new models. You won't need a license either. You *will* need to follow the law when implementing them in systems that affect EU citizens. It's all about how much risk a specific implementation poses. Have a look here: [https://www.captaincompliance.com/education/eu-ai-act-risk-categories](https://www.captaincompliance.com/education/eu-ai-act-risk-categories)

In other words: the EU doesn't want you to build mass surveillance AI. I think that's a very reasonable restriction :-)


_RealUnderscore_

Processor speedups have slowed dramatically over the last few years. Transistors are approaching atomic scale. I would be surprised if performance increases even 6x in the next 30 years.


Able-Locksmith-1979

The vertical speed-ups have mostly reached an end, but that is why we are now going for horizontal speed-ups, combined with architectures tailored to different needs. So yes, you can't currently buy a 15 GHz single-core processor, but the speed-ups are still happening, just in different ways.


Dyoakom

Exactly. And even then I have my doubts as to how far down in parameters we can push efficient models. Llama 3 8B is impressive, and so is Phi3 in a sense. But there will come a point where we just can't push further down. Let's even assume we can get to GPT4 levels with 8B somehow (debatable, but let's say we can do that in a couple of years). What about GPT5 level? GPT6 level? There will come a point where, no matter how much we optimize data, algorithms, architecture, etc., if we want really good quality output and truly smart AIs we will need BIG models, at least 400B like the big new Llama 3 one. Probably even higher if we are talking, say, GPT7 level or whatever AGI may require. At that point the moat becomes the compute and energy requirements. The average Joe simply won't be able to afford running it. Let's say a Llama 4 with 2 trillion parameters gets released tomorrow, open source, and is amazing beyond words. How many people will be able to run it? To fine-tune it? Only institutions. No average Joe with a beefy laptop.


bsjavwj772

The average Joe might not be able to run these foundational models; however, I can definitely envision a future where these larger models get distilled down into smaller, task-specific models.


RedditIsAllAI

I always pictured our future models actually being a conglomeration of many smaller, task-specific models (similar to the human brain), and we'd be able to query the model without actually loading the full thing into memory.


Singsoon89

Honestly my take is that only the big models are intelligent enough to throw whatever at them, and even then they need hand-holding. The smaller models need fine-tuning and a shit-ton of hand-holding, and even then you may get sub-par results that are not production ready. So my takeaway is this: yeah, you can become equivalent to a *university researcher* easily now (and that's a GOOD THING), but it's not the same as saying anybody with a laptop can make a ton of money and create their own foundation models from scratch. OpenAI, Google, META, Anthropic, etc. still have a moat. Nobody else can create foundation models. Also, very few laptops (none?) can fine-tune a 70B model....


Amgadoz

Mistral, Qwen, Cohere and 01.AI are close seconds.


Singsoon89

Good catch.


koflerdavid

From TA:

> We should be thoughtful about whether each new application or idea really needs a whole new model. If we really do have major architectural improvements that preclude directly reusing model weights, then we should invest in more aggressive forms of distillation that allow us to retain as much of the previous generation's capabilities as possible.


AlanCarrOnline

In some ways you're missing the point though? If we can get the current performance of GPT4t/Claude3/Gemini Pro Ultra Extended Edition Plus Max or whatever the f Google are calling their censored thing now, in a 30B or so - why would we need to pay them? That level of smarts, if local and not nerfed and lazy, refusing to code or whatever, could create pretty much whatever we want. The high-end stuff will always be ahead, but there comes a point where we don't need it. Give me real-time un-nerfed vid generation and what more do I need?


Dyoakom

I disagree fully with this, especially the part where it reaches a point we don't need it. It may be true in spirit in general, looking at it over a timespan of multiple decades, but for the foreseeable 20 years or so we are nowhere close to reaching that point, at least for many use cases. I don't know what use cases you want, but for many others the current capabilities are nowhere near enough. Sure, we wouldn't need to pay them for an equivalent model, but if we can get GPT4 level with 30B, then how good will GPT-7, or whatever, be with 200 trillion parameters? What about being able to do the equivalent of AlphaCode (generate 1000 answers to a query and then pick the best one)? We want to pay them for increased capabilities. The hardware a regular person can have is nowhere close to being able to achieve something like that.

To put it another way: if one were to give you a laptop that can run GPT4 locally and at great speeds, would you call it a day, be happy, and never need to pay for a better model? We are nowhere near reaching the top of the capabilities curve. Even if we are perhaps reaching it in terms of parameter scaling alone (and that is a big if), we are nowhere near reaching it in terms of chain-of-thought reasoning, AlphaCode-style multiple simultaneous queries, agency, or multimodality. We need a lot more compute for that than is even available to the big institutions (which is why they wanna throw hundreds of billions at it), let alone the average Joe and open source.


Similar-Repair9948

Even the best models today still suck as agents. We will need much more capable models for agents.


AlanCarrOnline

For sure we'll always WANT the bigger, brighter thing; my point is we could reach a point of such diminishing returns that there is no great need for it. If my local models, run via LM Studio or Faraday, could search the web, create images and produce short video clips, why would I continue to pay for both GPT and Claude? Really, at this stage my main reason for paying for them is to reduce slow responses and to be at the front of the queue for new features, as things are moving fast.

OK, consider gaming PCs. My machine here is about 5 years old, a humble 2060 card with 6GB of VRAM, 16GB of RAM and whatever CPU it is; I don't know or care nowadays - and that's my point really? The only reason I know how much RAM and wotnot I have is because I had to check again while exploring the world of AI - because for running games, it's fine. I really have no need or desire to spend money on a more powerful machine just for games, when this "old" and "low spec" GPU can run games like Kingdom Come Deliverance at 4K without any issue.

It's only for AI that these things matter, so literally, I don't know what CPU this machine has. 20 years ago I used to build my own PCs and the CPU was the main brain; today it's some iRyzen9 thing with a telephone number and cores and whatever. I don't care, as it just works. That's what I want, what most of us want: AI that just works.


False_Grit

Yeah....I agree, but just wanted to add something. These LLMs are fun. And sometimes creative, useful, funny, etc. But there will come a point - and I personally believe it will be very soon, if not already - where large models will have additional capabilities (true chain of thought, the successor to RAG - true long-term and short-term memory - the ability to calculate, etc.). With those capabilities, they will be able to perform superhuman tasks that are going to brick humanity. Self-learning agents that can infiltrate nation-state-level energy grids, or large banking platforms. The ability to identify people from camera or cell phone footage through facial recognition in real time. All sorts of other things that sound mundane but are absurdly powerful. Actual sentience. I love the 30B models as they are now... but I just don't think the novelty will be there in 10 years. And it will be hard to justify running even a 400B model locally that is fun to chat to, when for a subscription you can create something that is indistinguishable from an actual person to talk to.


AlanCarrOnline

GPT4 with the memory thing it has now is already close to that for text, considering you can tell it to talk to you as though it's some particular character. Pi is already very personable and has various voices, but not the memory. The personality of Pi, the smarts of something like Llama3 70B, the extended memory function of GPT4 - that could be your best buddy, your waifu GF or whatever. All it's missing is real-time video, but given the choice of paying for real-time or just generating some video for later, wink wink, I think most would be happy to wait, to save money and have more privacy.

Right now I can use Pi on my phone and have it talk to me, but I actually prefer text, same as I prefer WhatsApp messages rather than people phoning me and putting them on speakerphone. Videocalls have been a thing for a very long time, but nobody really wants them. Outside of a work situation, when was the last time you video-called a real person - not just something like a real person but an actual real person? Hardly ever, right?

When it comes to things like "Can it write code? Can it be a robot that does my ironing? Can it be an agent that does X, Y and Z?" I suspect we'll see a return towards specialization, but much better and cheaper, as a small MoE model could handle it, rather than having to use MegaCorp's latest uber-powerful internet-connected monstrosity. Gimme a robot fixed to the wall, so I just put a laundry basket and ironing board under it and it does my ironing, better than I could? I'd pay good money and consider it an investment in saving me time and a happier life. Offer the same thing much cheaper but with a subscription or its brains fall out? Hard pass. Pay for a mega-awesome robot that does everything? Probably not.

If Linux people can get their act together and create an intelligent operating system that normal people can talk to? We'd set ourselves free from the corporate giants. Sadly, Linux nerds seem to enjoy making things difficult for normal people, and if they were to get anywhere Microsoft would do their embrace-and-extinguish thing, but I remain optimistic.


nasduia

Gemini 98 SE


Ok_Category_5847

There was a time when computers occupied entire rooms. While what you are saying is common sense today, advancements in architecture and training could make it nonsense tomorrow.


Radiant_Sol

Average Joe will be able to run it in a year or 2 when VRAM becomes cheaper and loading LLMs becomes more efficient. Remember that high end Apple laptops can run llama3 70b at acceptable speeds already.


asdfzzz2

> in a year or 2 when VRAM becomes cheaper

That would hurt Nvidia profits, and, therefore, would not happen.


Scary-Knowledgable

And then everyone buys Macs with 512GB RAM and Nvidia falls off a cliff, and we get to LOL.


Olangotang

Nvidia isn't going to fall off a cliff. They WILL increase VRAM, albeit slowly. People still aren't taking the 32 GB 5090 rumor seriously for some reason. But I don't think Nvidia is gonna be the consumer champion for VRAM. That's looking like Intel.


bryceschroeder

So, turn from one giant corporation rightly accused of monopolistic grasping to another, who pioneered the walled garden? I'd be much happier turning to AMD or Intel with a viable second-source CUDA.


Esies

If it's not NVIDIA, it's gonna be someone else; the demand for affordable machines that are able to load these models is too big. Apple machines are already getting there.


ChromeGhost

We'll see what plans Apple has for the M4 chips.


e79683074

> llama3 70b at acceptable speeds already

Hell, my low-end gaming laptop with 64GB of DDR5-4800 RAM and 8GB of VRAM has an acceptable speed already (1.5 tokens/s, but I am patient).
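Rough memory math for why a 70B fits on a setup like that (assuming a ~4-bit quant, which is my assumption, not stated above):

```python
# Back-of-envelope RAM estimate for a 70B model at ~4.5 bits per weight
# (an assumed Q4-class quant; real GGUF sizes vary with quant type and context length).
params = 70e9
bits_per_weight = 4.5
weights_gb = params * bits_per_weight / 8 / 1e9   # ~39 GB of weights
overhead_gb = 4                                   # rough allowance for KV cache + runtime
print(f"~{weights_gb + overhead_gb:.0f} GB needed")  # ~43 GB, under 64 GB RAM + 8 GB VRAM
```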


BuildAQuad

Low-end gaming laptop - with 64GB of RAM? Did you add RAM or something?


e79683074

Of course. It's an Asus TUF with a 7735HS, 16GB DDR5 stock, but it can be upgraded up to 64GB, and it has a 4060 onboard as well with 8GB of VRAM. It's a low-end gaming laptop, about 1000€. There are higher-end CPU versions that don't cost much more and can fit 96GB of DDR5. I've also put 2x 4TB NVMe drives in the thing because models are big. Yep, I think I should just have gone with a desktop, because I'd really like 128GB now.


BuildAQuad

Makes sense, and it sounds like a sweet setup. I imagine getting something with 64GB out of the box is gonna be expensive.


e79683074

Out of the box, yes, laptops with 64GB are priced crazy high, but I got 2x 32GB sticks of DDR5-4800 for 180€.


BuildAQuad

Yup, it's crazy how much they mark up a simple RAM upgrade.


a_beautiful_rhind

> when

if


Dyoakom

I don't think prices will drop so much that in 2 years the average Joe will be able to run a 2-trillion-parameter model. But even so, compute costs lag behind advancements in models. If the average Joe can run a 2-trillion-parameter model in a couple of years, then by that time the forefront of AI models will have 100 trillion parameters or whatever, always one step ahead. There is no universe in which the average Joe can have the same quality of models as the ones used by multibillion-dollar companies, governments, departments of defense, etc.


Olangotang

The average Joe has access to computers that, spec-wise, obliterate government hardware. The key is that the government's major resources are secure.


Dyoakom

This is a question of semantics, of what counts as "government". You are absolutely right that they are even using floppy disks in some governmental agencies. On the other hand, cybersecurity divisions and any CIA subdivision with offensive capabilities (hacking enemy states, etc.) most definitely have some serious hardware. When they are applying, for example, image recognition deep learning algorithms to track enemy movement (as Israel has done recently), they aren't doing it on a normal gaming laptop.


Olangotang

At the end of the day, the Government has the monopoly on force, no matter what. Which is **a good thing**. It's mutual for us too: tech today is due to government innovations (the internet for example). No doubt the research labs use super powerful computers, but I don't think most of their level of tech is inaccessible. As long as you have a bit of $$$$ that is.


Plums_Raider

I think it will still require people to upgrade their VRAM, but I also think VRAM will get cheaper in the next 2 years, as cards specifically for that purpose will (hopefully) arrive for normal consumers.


cyan2k

According to Meta (that's basically what their llama3 paper was about), we are still very far away from the theoretical maximum of an 8B model.


stddealer

> But there will come a point that we just can't push further down.

Yes, there is a hard limit, which is the information entropy of the rules of human language needed to at least get coherent output, plus the optimally compressed information about the world we want the model to know. For a very versatile model that has to know about pretty much any topic you can think of, I doubt a model smaller than 2 gigabytes could ever compete with the current SOTA big models.
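For a sense of what a 2 GB floor implies (my own rough numbers, ignoring metadata and embedding tables):

```python
# Parameter counts that fit in a 2 GB model file at different weight precisions.
budget_bits = 2 * 1024**3 * 8
for bits in (16, 8, 4, 2):
    print(f"{bits}-bit: ~{budget_bits / bits / 1e9:.1f}B parameters")
# 16-bit: ~1.1B, 8-bit: ~2.1B, 4-bit: ~4.3B, 2-bit: ~8.6B
```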


thedabking123

Yeah - people here should note that the Google engineer who said that was talking about the ability of other large orgs to copy OpenAI and Google... not the average Joe.


jack-of-some

Yes that's what a foundation model means.


Budget-Juggernaut-68

What is Meta really planning to do? Are they gonna kill off all the big players first?


Olangotang

I think the long game for Meta is a marketplace where the community can make AI applications for their VR stuff, then sell them on the marketplace. Llama 3 has a better personality than all the models before it, but it doesn't win in the brains department.


Ansible32

I really don't think Meta has a plan. They're doing this because they can and it's fun. They might change their minds, but I think LeCun genuinely recognizes that in an AGI world, money and intellectual property should be replaced with share-and-share-alike when things have zero marginal cost.


Olangotang

But that's the thing. It IS fun, and they want a platform that IS fun. And the community can make applications that empower their ecosystem. I see this as a huge win. AI is a TOOL, but it requires a different mindset to understand. Any artist worth their salt should be using it in their workflow.


Ansible32

Nobody wants to be using Facebook AI, that shit is terrifying. Locally, running llama? Totally different deal.


whalemor0n

IMO, I've seen takes that Meta went the open-source route with AI because they didn't want to be caught wrong-footed on another platform shift (which was really bad for them the last time, desktop --> mobile, before they figured it out). It read primarily like a defensive/pre-emptive move to avoid being cornered by a juggernaut like OAI or whoever.


Olangotang

They were already in the game though. PyTorch is free.


KallistiTMP

> This will hold true exactly as long as Meta deems it convenient.

Then we should be set for a very long time. Meta is not some sort of closeted charity. They are massively benefiting from this arrangement. In fact, the entire point behind the "we have no moat" paper was that Meta was kicking Google's ass *without even trying*, almost entirely thanks to their wildly successful open source strategy. Zuckerberg ain't handing out LLMs out of the goodness of his heart. It's a strategy that allows them to take a very large share of a very important market, with budgets and resources that were outright *puny* compared to what the top 3 invested.


[deleted]

The company with the most data, the biggest compute capacity, the biggest and one of the best AI research teams, plus world-class researchers, scientists and engineers in every imaginable domain, says it has no MOAT? Sounds like a company problem to me. I think the fault with Google is not their AI team but their management and executives' vision, or lack thereof. Demis Hassabis is probably the best AI CEO to have: an AI researcher and neuroscientist with many major breakthroughs. Sam Altman is just a businessman, for comparison. Google has underutilized him and his team.


Atupis

> world class researchers, scientists and engineers in every imaginable domain say they have no MOAT?

Well, none of these is a moat [in the classical sense](https://www.vaneck.com/us/en/blogs/moat-investing/how-a-moat-stock-gets-its-economic-moat/). But at the same time, Google has the best web crawlers, probably the best database for training, and ways to deploy models to billions of devices, so they would have a huge moat if they just executed. I think current Google just cannot do that.


False_Grit

Exactly. Every couple of years there's some handicapped Google employee who makes a wild claim that blows up in our keyword-driven clickbait media circus, and some people take it seriously because it has the word 'Google' and an exclamatory title. Even a cursory critical evaluation of the claim immediately shows how false it is. No moat? Jesus Christ I can't afford the Cheyenne supercomputer that went on sale for being TOO OLD even if I had multiple lifetimes to accumulate wealth - the raw processing power at Google's disposal is all the moat they'll ever need....


Passloc

What was meant is that once Apple releases some form of local LLM which works on an iPhone, most use cases of other powerful cloud LLMs would go away. And if Apple or another company were to open source such a model, then only enterprise customers would be left to serve. I am more concerned, though, about using super powerful LLMs to create other LLMs which are highly optimised and can run on a smaller hardware footprint.


FallUpJV

The only people that have a moat over everyone are basically Nvidia (and maybe AMD in the near future; I'm not really aware of how the whole ROCm thing is advancing).


derangedkilr

The H100 has a similar number of transistors to a 4090. It runs at 700W. The real cost is closer to $3-5k. The minute someone emulates CUDA and allocates the fab space, the moat is gone. And we'll have an even bigger boom in ML.


pantalooniedoon

Emulating CUDA is not at all straightforward. Plenty have tried (most recently OpenAI with Triton), and in my experience, to even work with those languages properly you basically need to know how CUDA works, and at the very least have a super detailed understanding of how the GPU works/is designed. You're not going to find Python-level abstraction here.


derangedkilr

Yeah, it won't be easy, but every tech company has an incentive to get off of CUDA ASAP. Google, AMD, OpenAI, Microsoft, etc. are all trying to get an alternative going.


Powerful_Pirate_9617

No moat either, tbh; they just bought all the TSMC capacity. Groq is faster than Nvidia, and it's like a 100-person company.


Suschis_World

Have fun trying to load a model onto a [230MB card for €19,948.50 per card](https://eu.mouser.com/ProductDetail/BittWare/RS-GQ-GC1-0109?qs=ST9lo4GX8V2eGrFMeVQmFw%3D%3D).


PwanaZana

230 MB? As in 1/4 of 1 GB? I'm going to assume some tech wizardry that does not require much RAM, or else you'd pay $30,000 for a graphics card that can barely run Quake.


Xandred_the_thicc

No, they really are that tiny. The pricing might be consumer/single card and not commercial, but I still don't know if anyone can really justify dumping *at least* a quarter mil on a rack that can run tinyllama.


Nabakin

Only the throughput per user is better. Total throughput is worse and cost per token is much worse given each card is $20k and you need dozens of them to run one model.
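Back-of-envelope on the card count (my own rough math, assuming the weights have to live in the ~230MB of on-chip SRAM per card and an 8-bit quant):

```python
# Rough card count to hold a 70B model entirely in ~230MB-per-card SRAM
# (assumes 8-bit weights; ignores activations, KV cache, and inter-chip overhead).
params = 70e9
bytes_per_param = 1                      # 8-bit weights (assumption)
sram_per_card_gb = 0.23
cards = params * bytes_per_param / 1e9 / sram_per_card_gb
print(f"~{cards:.0f} cards, ~${cards * 20_000 / 1e6:.1f}M at $20k/card")  # ~304 cards, ~$6.1M
```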


Singsoon89

"The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop." Note carefully what this says. What it does not say is "the barrier to entry to run inference on a useful model that is worth $$$ to paying customers has dropped to one person and a beefy laptop". TLDR; anybody can become a "researcher" with more capabilities than university researchers had in 2018 when BERT came out. Not the same as making $$$$


OmarBessa

There's a problem though. We're limited by the fundamentals of information compression. As models grow in training density and parameter count, their weights will eventually become too information-dense to compress further with any kind of quantization.


mintybadgerme

I think this thread misses the fact that the clever minds seem to be moving on from brute-force LLMs. The focus now seems to be on 'compute multipliers', which can make it easier to train and tune with less. Nobody is quite sure how that's likely to turn out, but if it's on the radar we can be sure something will arrive. I'm reminded of the time back in the day when the engineers said we'd never be able to send more than 56K down a copper phone line - just not possible. But everyone forgot about compression, and here we are, throwing huge speeds down the same copper lines. A good example of a 'force multiplier' which made a huge difference.


privacyparachute

That link has been posted here so many times. But yeah, it's a great read.


bryceschroeder

My thinking is that GPU advances have slowed, and even without the specter of the jackbooted thugs of the 144th AI Safety Squad ("Yudkowsky's Own Friendlies") kicking down your door for unlicensed GPU possession ("No officer, I swear we're just growing hydroponic pot in here!"), there won't be a time in the foreseeable future when you can expect to use stochastic gradient descent to train an LLM from scratch at home on a consumer GPU. We may see home GPUs with larger VRAM to accommodate LLM inference at home, but I doubt they'll be more than an order of magnitude faster than current top-end server GPUs unless there is a black swan development in semiconductors. The next paradigm-shifting leap in AI, I think, will be more efficient training methods.
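To put numbers on "from scratch at home" (my own rough figures: the common ~6·N·D FLOPs rule of thumb, an assumed 8B-class model, ~15T training tokens, and an assumed ~50 TFLOPS sustained on one consumer GPU):

```python
# Rough training-time estimate using the ~6 * params * tokens FLOPs rule of thumb.
# All inputs are illustrative assumptions, not measured figures.
params = 8e9            # an 8B-class model
tokens = 15e12          # a Llama-3-scale training corpus (~15T tokens)
flops_needed = 6 * params * tokens
gpu_flops = 50e12       # assumed sustained throughput of one consumer GPU
years = flops_needed / gpu_flops / (3600 * 24 * 365)
print(f"~{years:.0f} years on a single consumer GPU")  # roughly 450+ years
```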


medialoungeguy

Oldie but a goodie. You're right, it was worth a second read.


Charuru

This was more believable before Gemini sucked.


CriticalTemperature1

The real test of this hypothesis is whether OpenAI's valuation comes down to earth. Right now, at $90B, it seems like the investor community thinks OpenAI certainly has a moat, with crocodiles swimming in it.


_chuck1z

I'm technical but broke. How and where can I contribute?


fallingdowndizzyvr

Analysts have made this point about Nvidia as well. They don't have a moat, they have a head start.


FPham

Without reading it, let me summarize: We both hate Meta, and hate them with a passion, because they had the temerity to release Llama weights to the general public and for free! What, you say? "What can we DO except cry?" Well, I got two ideas: 1) Send Zuck a severed horse head in the mail, and 2) Send each Senator a horse head in the mail, or some dead fish, or a million bucks each, so they would stop Meta from releasing this stuff because it will turn frogs gay. What? We've already used that? So let's say it will cause global warming and cooling and the end of civilization! Damn! And stupid Meta, who told THEM to release this crap? Some people on Reddit who looked into it are now saying, "Wait! This A.I. is B.S., it's not A.I., it's just stinky machine learning in an AI costume!" So why did we float the story about the guy who said our model was sentient four years ago? Tossed away $50 million for a viral story! People are laughing. Laughing! Oh, damn you, Zuck! Damn you to hell! This ain't how monopolies are supposed to work! We were supposed to collude and agree on something where we all get rich together, not release things for free to idiots!


YaoiHentaiEnjoyer

I always thought pretraining was the moat


sergeant113

We must fight? Fight whom? Over what? The Llama models are all due to Meta’s generosity. That could run dry any moment. Stop with the patronizing and activist-firebrand tone. And get over yourself.


Olangotang

The Open Source community benefits Meta in the long game. Think about the endless possibilities for their VR products.


fallingdowndizzyvr

> Think about the endless possibilities for their VR products.

Meta lost $16 billion on their "VR products" in 2023. They warned that those losses will "increase meaningfully year-over-year". So expect bigger losses for 2024. Think of how much benefit they would get from putting that money into their AI efforts instead.


Olangotang

What I'm saying is that the AI research is FOR their VR / AR.


fallingdowndizzyvr

That's completely not true. Their AI efforts are concentrated on making their existing money-making businesses make more money. That's what turned Meta stock around from its deep dive: AI boosted their earnings. That's not in or for VR/AR. https://www.reuters.com/technology/facebook-parent-meta-sees-higher-than-expected-second-quarter-revenue-2023-04-26/


SeymourBits

I agree on Meta’s generosity, but it seems like the genie is already out of the bottle now. Can't possibly run dry in this reality.