
AI + a16z

AI + a16z Podcast
a16z
Artificial intelligence is changing everything from art to enterprise IT, and a16z is watching all of it with a close eye. This podcast features discussions with...

Available Episodes

5 of 29
  • REPLAY: Scoping the Enterprise LLM Market
    This is a replay of our first episode from April 12, featuring Databricks VP of AI Naveen Rao and a16z partner Matt Bornstein discussing enterprise LLM adoption, hardware platforms, and what it means for AI to be mainstream. If you're unfamiliar with Naveen, he has been in the AI space for more than a decade, working on everything from custom hardware to LLMs, and has founded two successful startups — Nervana Systems and MosaicML.
    Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
    --------  
    43:12
  • Building Developer Tools, From Docker to Diffusion Models
    In this episode of AI + a16z, Replicate cofounder and CEO Ben Firshman and a16z partner Matt Bornstein discuss the art of building products and companies that appeal to software developers. Ben was the creator of Docker Compose, and Replicate has a thriving community of developers hosting and fine-tuning their own models to power AI-based applications.
    Here's an excerpt of Ben and Matt discussing the difference in the variety of applications built using multimedia models compared with language models:
    Matt: "I've noticed there's a lot of really diverse multimedia AI apps out there. Meaning that when you give someone an amazing primitive, like a FLUX API call or a Stable Diffusion API call, and Replicate, there's so many things they can do with it. And we actually see that happening — versus with language, where all LLM apps look kind of the same if you squint a little bit. It's like you chat with something — there's obviously code, there's language, there's a few different things — but I've been surprised that even today we don't see as many apps built on language models as we do based on, say, image models."
    Ben: "It certainly maps with what we're seeing, as well. I think these language models, beyond just chat apps, are particularly good at turning unstructured information into structured information. Which is actually kind of magical. And computers haven't been very good at that before. That is really a kind of cool use case for it.
    "But with these image models and video models and things like that, people are creating lots of new products that were just not possible before — things that were just impossible for computers to do. So yeah, I'm certainly more excited by all the magical things these multimedia models can make."
    Follow everyone on X: Ben Firshman, Matt Bornstein, Derrick Harris
    Learn more: Replicate's AI model hub
    Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
    --------  
    41:49
  • The Best Way to Achieve AGI Is to Invent It
    Longtime machine-learning researcher and University of Washington Professor Emeritus Pedro Domingos joins a16z General Partner Martin Casado to discuss the state of artificial intelligence, whether we're really on a path toward AGI, and the value of expressing unpopular opinions. It's a very insightful discussion as we head into an era of mainstream AI adoption, and ask big questions about how to ramp up progress and diversify research directions.
    Here's an excerpt of Pedro sharing his thoughts on the increasing cost of frontier models and whether that's the right direction:
    "If you believe the scaling laws hold and the scaling laws will take us to human-level intelligence, then, hey, it's worth a lot of investment. That's one part, but that may be wrong. The other part, however, is that to do that, we need exploding amounts of compute.
    "If I had to predict what's going to happen, it's that we do not need a trillion dollars to reach AGI at all. So if you spend a trillion dollars reaching AGI, this is a very bad investment."
    Learn more: The Master Algorithm; 2040: A Silicon Valley Satire; The Economic Case for Generative AI and Foundation Models
    Follow everyone on X: Pedro Domingos, Martin Casado
    Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
    --------  
    38:02
  • Neural Nets and Nobel Prizes: AI's 40-Year Journey from the Lab to Ubiquity
    In this episode of AI + a16z, General Partner Anjney Midha shares his perspective on the recent collection of Nobel Prizes awarded to AI researchers in both Physics and Chemistry. He talks through how early work on neural networks in the 1980s spurred continuous advancement in the field — even through the "AI winter" — which resulted in today's extremely useful AI technologies.
    Here's a sample of the discussion, in response to a question about whether we will see more high-quality research emerge from sources beyond large universities and commercial labs:
    "It can be easy to conclude that the most impactful AI research still requires resources beyond the reach of most individuals or small teams. And that open source contributions, while valuable, are unlikely to match the breakthroughs from well-funded labs. I've even heard some dismissive folks call it cute, and undermine the value of those.
    "But on the other hand, I think that you could argue that open source and individual contributions are becoming increasingly more important in AI development. I think that the democratization of AI will lead probably to more diverse and innovative applications. And I think, in particular, the reason we should expect an explosion in home scientists — folks who aren't necessarily affiliated with a top-tier academic, or for that matter, industry lab — is that as open source models get more and more accessible, the rate limiter really is on the creativity of somebody who's willing to apply the power of that model's computational ability to a novel domain. And there are just a ton of domains and combinatorial intersections of different disciplines.
    "Our blind spot for traditional academia [is that] it's not particularly rewarding to veer off the publish-or-perish conference circuit. And if you're at a large industry lab and you're not contributing directly to the next model release, it's not that clear how you get rewarded. And so being an independent actually frees you up from the incentive misstructure, I think, of some of the larger labs. And if you get to leverage the millions of dollars that the Llama team spent on pre-training, applying it to data sets that nobody else has perused before, it results in pretty big breakthroughs."
    Learn more: They trained artificial neural networks using physics; They cracked the code for proteins' amazing structures; Notable AI models by year
    Follow on X: Anjney Midha, Derrick Harris
    Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
    --------  
    40:12
  • How GPU Access Helps AI Startups Be Agile
    In this episode of AI + a16z, General Partner Anjney Midha explains the forces that lead to GPU shortages and price spikes, and how the firm mitigates these concerns for portfolio companies by supplying them with the GPUs they need through a program called Oxygen.
    The TL;DR version of the problem is that competition for GPU access favors large incumbents who can afford to outbid startups and commit to long contracts; when startups do buy or rent in bulk, they can be stuck with lots of GPUs and — absent training runs or ample customer demand for inference workloads — nothing to do with them.
    Here is an excerpt of Anjney explaining how training versus inference workloads affect what level of resources a company needs at any given time:
    "It comes down to whether the customer that's using them . . . has a use that can really optimize the efficiency of those chips. As an example, if you happen to be an image model company or a video model company and you put a long-term contract on H100s this year, and you trained and put out a really good model and a product that a lot of people want to use, even though you're not training on the best and latest cluster next year, that's OK. Because you can essentially swap out your training workloads for your inference workloads on those H100s.
    "The H100s are actually incredibly powerful chips that you can run really good inference workloads on. So as long as you have customers who want to run inference of your model on your infrastructure, then you can just redirect that capacity to them and then buy new [Nvidia] Blackwells for your training runs.
    "Who it becomes really tricky for is people who bought a bunch, don't have demand from their customers for inference, and therefore are stuck doing training runs on that last-generation hardware. That's a tough place to be."
    Learn more: Navigating the High Cost of GPU Compute; Chasing Silicon: The Race for GPUs; Remaking the UI for AI
    Follow on X: Anjney Midha, Derrick Harris
    Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
    --------  
    39:08
