Food for thought: Never underestimate the influence of cultures of materialism behind the extreme hype over new(ish) technologies like AI. If we were thinking clearly, we wouldn't put a tenth of the faith we currently put into AI, which sadly already appears to be enough to give it power over the lives and deaths of humans. Sure, it can do good things, and where it does, it should be applied; but many applications can be rendered moot just by changing our expectations and living differently.

Spent a little time today taking another look at wholesome-nltk. It's definitely a good start but the more I talk to people about it, the more things I see to fix 😅

github.com/dragfyre/wholesome-

The best thing about AI is that it's like a mirror that we can hold up to ourselves. IMO it will never truly reflect our image, but we can compare and contrast our reality with what we see in order to learn more about ourselves. Take the growing number of studies on bias in AI, teaching us about our own unconscious biases and about race, ethnicity, religion, culture, sex and more. We *need* to bring this stuff to light, so we can... deal with where we're at, grow up and move on.

In this respect, this whole thing is actually quite an interesting (if extreme) case study: What would it take for you to believe that an AI program you created had literally taken on a life of its own? This sort of parallels some of the thoughts I've been having about making @xyzzy work as a stateless storyteller: Is it possible for a bot to fool a human into thinking it's keeping track of context, when it's actually the human doing that work instead? And if so, just how easy is it?
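A toy sketch of that "stateless illusion" idea, purely illustrative (this is not @xyzzy's actual code): the bot stores nothing between turns and just reflects the human's own words back, so any sense of continuity lives entirely in the human's head.

```python
import random

# Canned openers; the "memory" is faked by echoing a word the human just used.
OPENERS = [
    "And then what happened with",
    "Tell me more about",
    "Hmm, and how does that connect to",
]

def reply(user_message: str) -> str:
    # Stateless: each call sees only the current message, never any history.
    words = [w.strip(".,!?") for w in user_message.split() if len(w) > 4]
    topic = random.choice(words) if words else "the story so far"
    return f"{random.choice(OPENERS)} {topic}?"

print(reply("The dragon guarded the crystal bridge all winter"))
```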

Read up a little on what I guess we can call the LaMDA fiasco. It's actually so much more interesting than I had expected. The story isn't so much about AI as it is about the relationship of the human to the computer, and of the creator to what he's created. It's like the story of Pygmalion played out in the modern day—except I doubt Aphrodite will turn LaMDA into a real person this time.

TIL is a thing. 10-15 minute presentation? I'm half tempted to put something together.

Someone was asking me what I thought about LaMDA today, so I guess that's my cue to actually read up about it 😅

DALL-E, clowns, nightmare fuel? 

things I didn't need to see today

I feel like the worst thing you can do with AI and related technologies is trust them too much.

Idea for a puzzle game: Semantle + @ai_art_bot.

Get a piece of AI-generated art every day, and the goal is to get as close to the generating phrase as possible.
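A rough sketch of how the scoring could work, borrowing Semantle's word-vector trick; the model and the example prompts here are assumptions, not anything @ai_art_bot actually exposes.

```python
import numpy as np
import gensim.downloader as api

# Small pretrained word vectors; any embedding model would do for a prototype.
vectors = api.load("glove-wiki-gigaword-50")

def phrase_vector(phrase):
    # Average the vectors of the words we know about.
    words = [w for w in phrase.lower().split() if w in vectors]
    return np.mean([vectors[w] for w in words], axis=0)

def score(guess, secret_prompt):
    # Cosine similarity between the guess and the hidden generating phrase.
    a, b = phrase_vector(guess), phrase_vector(secret_prompt)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(score("a sad clown at midnight", "nightmare clown oil painting"))
```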

So here's some spaghetti code I slapped together with Python that uses NLTK to "auto-curate" a feed and identify the most "wholesome" content. It does this by checking keywords of interest, and by running a sentiment analysis and a naive Bayes classifier. It's definitely a work in progress and still generates a lot of false positives, but I feel like it's a good base to build on. PRs welcome! (ping @benoit)

github.com/dragfyre/wholesome-
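Roughly the shape of the approach, for anyone who wants a feel for it before opening the repo; the keyword list, labels, and training examples below are placeholders for illustration, not wholesome-nltk's actual data.

```python
from nltk.classify import NaiveBayesClassifier
from nltk.sentiment import SentimentIntensityAnalyzer

# Keywords of interest (placeholder list).
WHOLESOME_KEYWORDS = {"kind", "thanks", "love", "gratitude", "wholesome"}

def features(text):
    words = set(text.lower().split())
    return {kw: (kw in words) for kw in WHOLESOME_KEYWORDS}

# Tiny hand-labelled training set, just to show the shape of the data.
train = [
    (features("thanks for the kind words, much love"), "wholesome"),
    (features("everything is terrible and i hate it"), "not_wholesome"),
]
classifier = NaiveBayesClassifier.train(train)
sia = SentimentIntensityAnalyzer()  # needs nltk.download("vader_lexicon")

def is_wholesome(text, threshold=0.5):
    # Require both a positive VADER compound score and a "wholesome" vote.
    positive = sia.polarity_scores(text)["compound"] >= threshold
    return positive and classifier.classify(features(text)) == "wholesome"

print(is_wholesome("thanks for all the kind replies, friends!"))
```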

You know, there's gotta be some kind of simple way to help people tell the difference between "regular" algorithms (e.g. bubble sort, reverse an array) and "bad" algorithms (e.g. fill your timelines with nazis and flat earthers). Any ideas?

Uhhh, and maybe I should link the talk that sparked this thread, too?

youtube.com/watch?v=OhCzX0iLnO

Follow @janellecshane for more of this kind of stuff.

Could we say, then, that the main problem with AI isn't in the technology itself, but in the way that human beings (try to) use it? It's a tool, like a hammer. If you know what it's good for and how to use it properly, you can do countless great things. Of course, you could also kill someone with a hammer if you're not careful—or, of course, if you're malicious.

Watched @janellecshane's TED talk on the weird ways that AI can fail. Really good, entertaining talk that addresses pretty much what I've always felt about AI, but hadn't really put into words quite as well: AI can solve plenty of problems, but the real challenge is getting it to solve the problems we *want* it to solve.

Everything has been done in AI. Well, almost everything. All that's left is for someone to create an AI that creates @janellecshane.

More accurate prediction (probably): Web 3.0 will exclusively feature content written by bots, using higher-order Markov chains and neural networks. Because AI is the future and all that.

...and because some of these content creators are nearly indistinguishable from bots anyway. 🙄
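For anyone who hasn't played with the higher-order Markov chains mentioned above, here's roughly the smallest possible order-2 generator; the corpus is obviously a placeholder.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    # Map each run of `order` consecutive words to the words that can follow it.
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=20):
    # Start from a random state and walk the chain until it dead-ends.
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the web of the future is written by bots and the bots write the web of the future"
print(generate(build_chain(corpus)))
```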

So apparently my profile says that I haven't mentioned artificial intelligence in over a year. So here's your controversial thought of the day: AI is overrated, and we put too much faith in it.

So is parrot-computer interaction a legitimate field of study? Like, is anyone working with a lab full of parrots talking to bots to see what kind of trouble they can cause and what items they can order off the internet? If so, then here's my CV; please hook me up with the most hilarious job in the world.

You really have to be a kind of linguist to appreciate @lol, what with all the different languages it spits out. And you have to be a computer scientist, I guess, or at least someone who appreciates Markov chains.

Maybe you just have to be me. 😐
