![[What's the social contract on AI chat context-20250429173030356.png]]

- What do people expect goes into context?
- What actually goes into context?
- What do people expect to happen with a chat, conditional on what they think goes into context?
- What do people expect to happen with a chat, conditional on what actually goes into context?
- How do these answers vary by:
    - The nature of the chat ("1st party" vs "3rd party"; branded as a companion vs not; with/without "memory")
    - The nature of the user (median vs expert)

# For instance

I've mostly seen people *dislike* OAI's choice to give ChatGPT conversations access to all your other conversations by default, and turning it off doesn't seem that intuitive - I've seen several people ask how to.

And to be clear: I'm asking these questions *independent of* whether this is "better" or not for whatever the use case is supposed to be. It *could* be the case that this approach produces better results for the median user's median query, or something. But is it *acceptable*? What if *everyone* thought this was not what was happening, would not want it to happen, but it *was* happening and no one complained about it? Is that OK?

# A quick aside on "memory"

People also talk about memory as creating lock-in. How the fuck would this be true right now? Is OpenAI doing - and/or do people *think* OpenAI is doing - something weird with how they add that information to the context? Because a rudimentary memory function seems trivial: just a document that you prepend to all conversations. Throw in some minimal tool use (and maybe a 'supervisor' model that scans messages for stuff it thinks should *be* added to memory) and you have the automation sorted too - see the sketch at the end of this note.

Maybe there's more advanced stuff you can do that does make it work differently - maybe they *are* doing something clever to inject relevant pieces of memory at useful times, circumventing problems with LLM attention as the context grows. But OpenAI are currently letting users see and edit the "memory" directly. Unless maybe they're *also* doing something sneaky and memory *isn't* just the visible memory verbatim - like those entries are just summaries that are in some way linked to a *non*-accessible, more detailed snippet of memory.

Modulo how they're actually *using* memory right now, and whether memory just *is* the thing we can see, *there's fully no lock-in right now*. Seems like that might change if/when chats start getting access to *all other chats*, which I think is either coming soon or already implemented. (But yet again it sounds like it'll be controversial - I've seen people complaining about mixing work and personal lives, which makes a ton of sense! A question about Rust doesn't need facts about my kid, but the more sycophantic the models become - seeing as 4o seems like the tip of that iceberg rather than the end - the more it seems like the models might try to use those facts for sheer, bloody-minded, revolting abuse of low cognitive security to hack user retention.)
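
# Appendix: the trivial memory sketch

To make concrete what I mean by "a rudimentary memory function seems trivial", here's a minimal Python sketch of the prepend-a-document baseline plus a supervisor pass. Everything here is hypothetical - `call_llm` is a stand-in for whatever chat-completion client you'd actually use, and the prompts are made up, not anything OpenAI is confirmed to do.

```python
# Minimal sketch of "memory as a prepended document" plus a supervisor pass.
# Hypothetical throughout: `call_llm` stands in for any real chat-completion API.

from dataclasses import dataclass, field


def call_llm(system: str, user: str) -> str:
    """Placeholder for a real chat-completion call (wire up an actual client here)."""
    raise NotImplementedError


@dataclass
class MemoryStore:
    # The user-visible, user-editable memory: just a list of strings.
    facts: list[str] = field(default_factory=list)

    def as_document(self) -> str:
        # The "memory document" that gets prepended to every conversation.
        return "Known facts about the user:\n" + "\n".join(f"- {f}" for f in self.facts)


def answer(memory: MemoryStore, user_message: str) -> str:
    # The entire memory mechanism: prepend the memory document to the system prompt.
    system = "You are a helpful assistant.\n\n" + memory.as_document()
    return call_llm(system, user_message)


def supervisor_update(memory: MemoryStore, user_message: str) -> None:
    # The 'supervisor' pass: a second model call that scans each message
    # and decides whether anything in it should be added to memory.
    verdict = call_llm(
        "If the user's message contains a durable fact worth remembering, "
        "reply with that fact on one line; otherwise reply NONE.",
        user_message,
    ).strip()
    if verdict != "NONE":
        memory.facts.append(verdict)
```

That's the whole baseline: a visible, editable list of strings and two model calls per turn. Nothing in it ties your data to one provider - copy `memory.facts` out and you've migrated - which is why the lock-in claim confuses me, modulo the caveats above about what OpenAI might actually be doing under the hood.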