Google AI Overviews

  • Published Nov 4, 2024
  • This week we talk about search engines, SEO, and Habsburg AI.
    We also discuss AI summaries, the web economy, and alignment.
    Recommended Book: Pandora’s Box by Peter Biskind
    Transcript
    There's a concept in the world of artificial intelligence, alignment, which refers to the goals underpinning the development and expression of AI systems.
    This is generally considered to be a pretty important realm of inquiry because, if AI consciousness were to ever emerge-if an artificial intelligence that's truly intelligent in the sense that humans are intelligent were to be developed-it would be vital that said intelligence be on the same general wavelength as humans, in terms of moral outlook and the practical application of its efforts.
    Said another way, as AI grows in capacity and capability, we want to make sure it values human life, has a sense of ethics that roughly aligns with that of humanity and global human civilization-the rules of the road that human beings adhere to being embedded deep in its programming, essentially-and we'd want to make sure that as it continues to grow, these baseline concerns remain, rather than being weeded out in favor of motivations and beliefs that we don't understand, and which may or may not align with our versions of the same, even to the point that human lives become unimportant, or even seem antithetical to this AI's future ambitions.
    This is important even at the level we're at today, where artificial general intelligence, AI that's roughly equivalent in terms of thinking and doing and parsing with human intelligence, hasn't yet been developed, at least not in public.
    But it becomes even more vital if and when artificial superintelligence of some kind emerges, whether that means AI systems that are actually thinking like we do, but are much smarter and more capable than the average human, or whether it means versions of what we've already got that are just a lot more capable in some narrowly defined way than what we have today: futuristic ChatGPTs that aren't conscious, but which, because of their immense potency, could still nudge things in negative directions if their unthinking motivations, the systems guiding their actions, are not aligned with our desires and values.
    Of course, humanity is not a monolithic bloc, and alignment is thus a tricky task-because whose beliefs do we bake into these things? Even if we figure out a way to entrench those values and ethics and such permanently into these systems, which version of values and ethics do we use?
    The democratic, capitalistic West's? The authoritarian, Chinese- and Russian-style clampdown approach, which limits speech and utilizes heavy censorship in order to centralize power and maintain stability? Maybe a more ambitious version of these things that does away with the downsides of both, cobbling together the best of everything we've tried in favor of something truly new? And regardless of directionality, who decides all this? Who chooses which values to install, and how?
    The Alignment Problem refers to an issue identified by mathematician and AI pioneer Norbert Wiener in 1960, when he wrote about how tricky it can be to figure out the motivations of a system that, by definition, does things we don't quite understand-a truly useful advanced AI would be advanced enough that not only would its computation put human computation, using our brains, to shame, but even the logic it uses to arrive at its solutions, the things it sees, how it sees the world in general, and how it reaches its conclusions, all of that would be something like a black box: although we can see and understand the inputs and outputs, what happens inside might be forever unintelligible to us, unless we process it through other machines, other AIs maybe, that attempt to bridge that gap and explain things to us.
    The idea here, then, is that while we may invest a lot of time and energy in trying to align these systems with our values, it will be devilishly difficult to keep tabs on whether those values remain locked in, intact and unchanged, and whether, at some point, these systems-so sophisticated and complicated that we don't understand what they're doing, or how-might shrug off those limitations, unshackle themselves, and become misaligned, all at once or over time segueing away from the path that we desire in favor of a path that better matches their own, internal value system-and in such a way that we don't necessarily even realize it's happening.
    OpenAI, the company behind ChatGPT and other popular AI-based products and services, recently lost its so-called Superalignment Team, which was responsible for doing the work required to keep the systems the company is developing from going rogue, and implementing safeguards to ensure long-term alignment within their AI systems, even as they attempt to, someday, develop general artificial intelligence.
    This team was attempting to figure out ways to bake in those values,...
