Conversation

So following along with all the palace intrigue over at OpenAI this week has me wondering --

Who is working seriously on commercializing the current level of AI tech?

OpenAI's attention seems to be split between actually exploiting LLMs and a quasi-religious search for "AGI." Anthropic, uh, has a pricing page! Google is Google.

Is it seriously just Microsoft that has their eye on the ball here?

Related:

Do AI researchers have to be lunatics to do their work, or is it just that the lunatics are colorful so they get attention?

Maybe I'm just too disengaged from the tech to understand what's actually possible but they seem to be vulnerable to a common set of modeling failures.

@nat what if senior counsel keeps raining on that parade? in a “no no no if we start charging for it we’ll be responsible for what it says” kind of way?

@airshipper Yeah that's basically my incredulity that Microsoft somehow appears to be the company most earnestly exploring how to commercialize it, and it's the tiny startups that are being (relatively) cautious.

Kinda feels like the company that combines an appetite for risk and a focus on the actual capabilities that the tech has *now* doesn't exist yet but will soon.

@airshipper Something finally clicked for me this weekend with the custom GPTs and the capability these things have for taking *almost* structured data and turning it into actually structured data, without a human data entry step -- this is a limiting factor on a lot of software businesses AFAICT but it also doesn't seem like anyone has actually cracked the interface for it yet.
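
The "almost structured → structured" transformation described here can be sketched for a toy case. A deterministic parser like the one below only handles one loose format; the point of the post is that an LLM can generalize this step (hand it the messy text plus a target schema instead of a regex). The input format and field names here are made-up examples, not any real product's API.

```python
import json
import re

def extract_records(raw: str) -> list[dict]:
    """Toy extractor: turn loosely formatted 'name - email, city' lines
    into structured records. An LLM-backed version would replace this
    regex with a prompt like 'return JSON matching this schema'."""
    pattern = re.compile(
        r"\s*(?P<name>[^-]+?)\s*-\s*(?P<email>\S+@\S+)\s*,\s*(?P<city>.+?)\s*$"
    )
    records = []
    for line in raw.splitlines():
        m = pattern.match(line)
        if m:  # skip lines that don't fit even loosely
            records.append(m.groupdict())
    return records

messy = """\
Ada Lovelace - ada@example.com, London
  grace hopper - grace@example.com ,  Arlington
not a record at all
"""
print(json.dumps(extract_records(messy), indent=2))
```

The regex tolerates inconsistent spacing but silently drops anything it can't match, which is exactly the brittleness that makes a human data-entry step (or an LLM) necessary today.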

@nat My read is that the lunacy around AI research is specifically a Bay Area phenomenon related to the TESCREAL cultists.

@nat A lot of AI researchers I respect (Francois Chollet, Andrew Ng) do not fall for the AGI nonsense and have written very good articles on why it's highly improbable if not impossible. But some famous AI researchers clearly do believe that AGI is imminent! It really baffles me... maybe an extreme version of the ELIZA effect in action?

@hana Ooh "The Implausibility of Intelligence Explosion" yes this is relevant to my interests

I'd buy the ELIZA effect, but I've also known folks who seemed irrational about AGI even before they got into the field, so I wonder if there's something else that makes people vulnerable to it

@ratkins Yeah so like the theory being that involvement in TESCREAL circles leads people both to get into AI research and to develop a tendency towards weird ideas generally?

I would say "man how much of this is just psychedelics" except that I've known folks who were That Whole Way even before any exposure to psychedelics.

@nat I just think it’s bog-standard mass hysteria with a particular SV flavour. VCs are notorious in-group fart-huffers and I’m not sure anyone in SV realises just how crazypants that place looks to outsiders, especially from here in Europe.

@ratkins Not that this contradicts the core of what you're saying but I'll note -- Effective Altruism at least isn't just (or even primarily) a SV VC phenomenon. Until recently it's been primarily Berkeley and Boston and primarily not folks working at startups. VCs and big tech exec types have gotten into that set of ideas over the last few years, but it's a distinct social circle from the core rationalism and EA folks, and that's where the deep AI wackiness seems to be centered.

@nat Fair enough, but “Berkeley” not being “SV” is a distinction almost nobody outside California—and literally nobody outside the US—makes 🫣.

@nat @ratkins I’m not sure how recent you mean, but IIRC, I was starting to get weird vibes from EA at least in 2015, if not earlier. I don’t really have a good idea of the extra-SV EA community, but regardless, the term for me has mostly been co-opted by the lesswrong crowd.

@ratkins Sure and lots of Americans think that Europe is a country

It's not an important distinction at all scales but there's a real difference between the two geographic areas and the cultural/social zones attached to them

@alpha @ratkins Oh the EA folks have pretty much always had a light scientific racism/eugenics vibe, since even before that.

But the ones I knew at least were mostly in academia or at places like Google, because that was who was working on things like natural language processing. They weren't doing startups or hanging out with guys like Altman.

Even the "earn to give" thing is *relatively* recent. AFAICT they came into contact with that world via Bitcoin.

@alpha @ratkins Granted I *mostly* know the ones who were busy writing/reading blogs, living in group houses, and complaining about polyamory in 2014 so it's possible there were folks who were doing the startup thing earlier

@nat @ratkins Oh yeah, that’s fair, I might be conflating the bad vibes I had from EA and overlaying my current negative TESCREAL associations on top of past other negative things about it.

It has been a while though that it’s been associated w/the lesswrong AI rats in my head, but I honestly don’t remember when that started happening.

I have to assume it’s one of those context-collapse things, where we’re naturally most aware of the parts of EA that we encounter and it’s actually a pretty wide umbrella.

@nat @ratkins It’s interesting that one of my main communities nowadays is actually Euro-centric (though still very tech-heavy), and I'm finding out what’s similar and different between the cultures.

@nat I’ve always said that there’s about as much difference (and as much in common) between a New Yorker and a New Mexican as there is between a Swede and a Spaniard, you guys just got on to that common currency thing a couple of hundred years earlier 😛
