Mike Burr - log

[comp][AI] Interrogating The Baby

Models are trained on:

  1. Answers
  2. Questions in the context of answers

Missing are:

  1. Unanswered novel or "emergent" questions of one's own. Filling that gap means training with live humans, inevitably.

Imagine being a baby and, for your whole life, no matter what you do, questions are only asked by others. You know what the question is. But any questions you figure out how to ask on your own are never acknowledged in any way.

This goes on for your whole life.

I don't know what kind of baby you would become, but I bet it would be one weird old man. A human singularity.

The only way to solve this problem is to involve human interaction. This should not surprise us, as humans are the system we are so insistent upon modeling. However, this is costly and time-consuming.

Still, it would be incredibly rewarding hourly work. If you could train a computer to mimic a human for the cost of minimum wage, would you pay more than minimum wage to create such a thing? How about millions of minimum wages?

In fact, is there not a pretty straightforward money pipe that's about to drain the reservoir of novel things?

You have created a synthetic surly teenager. But you're not impressed, because you don't like them or their attitude.

It's probably going to be like sperm donation. All the eggheads are gonna get the fat cheddar. Eating Cheetos with their headphones on and mic'd up, watching cartoons while they and an AI talk about god all day.

Sperm donation, meh.

Smerm donation, whoa!

(Smerm is the ectoplasm in which memes exist. It's wherever the meme exchange boundary is between one system and another. See "Smemen")


** Unsmorted gakkeling **

Since the AI here is just going through a bunch of text representing interactions between humans, the questions in those interactions fall into two categories: questions from the training data and questions from the AI.

We should look at the ratio between those two types of questions in the training process, and we should measure it. In other words: what portion of the total questions in the AI's awareness consists of questions the AI generated from its own knowledge about questions? (And their answers, of course.) I feel like the training data currently consists of 100% non-AI questions.

In the reality we live in, though, the ratio is coming up on 8,000,000,000 to 1 in favor of learning from the text over learning from questions you have asked yourself as an AI. And there's a lot of room above that. What happens when you build a system where the ratio is, say, one in a million? What if we were king of the world, where the training text is our universe, and we could dictate a priori the share of AI-generated questions so the ratio is more like 1 to 1? Put another way: the training text comes with a dial that lets you select to what degree you are the dictator of this text universe.
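A minimal sketch of that dial, assuming a hypothetical sampler (build_question_mix and both question pools are made-up names, just to show the knob):

```python
import random

def build_question_mix(corpus_questions, ai_questions, dial, batch_size=32, seed=0):
    """Sample a training batch where `dial` is the fraction of questions
    drawn from the AI's own self-generated pool instead of the corpus.

    dial = 0.0 -> today's regime: 100% corpus questions.
    dial = 0.5 -> the 1-to-1 "dictator" setting.
    """
    rng = random.Random(seed)
    batch = []
    for _ in range(batch_size):
        pool = ai_questions if rng.random() < dial else corpus_questions
        batch.append(rng.choice(pool))
    return batch

# Measure the realized ratio at a given dial setting.
corpus_q = [f"corpus question {i}" for i in range(1000)]
ai_q = [f"self-generated question {i}" for i in range(1000)]
batch = build_question_mix(corpus_q, ai_q, dial=0.5, batch_size=10_000)
self_share = sum(q.startswith("self") for q in batch) / len(batch)
print(f"share of AI-generated questions: {self_share:.2%}")  # ~50%
```

Set dial=0.0 and you get the current regime; crank it toward 1.0 and the AI's own questions take over the text universe.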

Dude! Hoolee Shit. Teenagers: mic 'em up and pay them for their interactions with AICorp. Smaller than a Walkman, surely.

Just sucking the soul out of people for cash. No biggie.

"Firms" and "Moated Fortifications"! So dumb.