Overnight I let my iPhone upgrade from iOS 18.1 to 18.2, and I was sweating it. I'd checked, when upgrading from 17 to 18, whether Apple Intelligence was going to be forced on me. I couldn't get a clear answer, but it seemed not, and it turned out not to work yet on my very basic iPhone SE gen 3. Last night, Apple snuck inbox filtering into Mail, but gave me an ellipsis-menu switch back to my uninterfered-with inbox. Fortunately, as I'd hoped, Apple Intelligence is set up to respect my Siri settings, so GPT was completely turned off.
I've been messing around with tech since the mid 1970s, using tech for learning since the dawn of the internet in Australia in 1995, and making my own tech in the mid 70s and again since 2007, when I gained more financial freedom after leaving a loveless marriage. I don't need AI, and I can find knowledge and learn stuff better than it can. This is not grey-haired Mr. Grumpy-Curmudgeon talking: I have done extensive experiments with a number of AIs. I embrace tech, at least to see what it can do for me. I've tried GitHub Copilot, GPT and Midjourney, and run local instances of GPT4All, along with Ollama serving a general query model and the CodeLlama model. None of them answer queries of any kind with guaranteed reliability. With a search engine, I can find answers and learning faster and more reliably than any queries to AI can advance my knowledge. No, don't argue: this is my experience, it is my reality, I'm not using it wrong. I've used it the same way everybody has, with great interest and excitement, framed my queries simply and in stages, and received answers with a usefulness ranging from a "meh" to "tripping balls like Archer on a family trauma related bender." I have no place in my workflow for generative AI; it is a dead end.
I used Siri for about 3 weeks when it first rolled out: useless, intrusive, and it interrupted my workflows and thinking patterns. I tried it again a few years later, because I've always trusted Apple to provide good new tech eventually. Nope, still useless and intrusive. So I have Siri turned off. The only thing Siri ever did right for me was this: after it noisily intruded into a conversation I was having with a friend, causing me to exclaim, "Fuck OFF, Siri!" it replied, "How rude!" I'm so glad my friend witnessed this, I have a witness! Later, when I opened my phone to turn Siri off, she'd "left," all Siri settings were off. One reason I still like Apple, coder's easter eggs.
AI looks useful, but dig deeper. It trains on what you can already find using a web search. It's the mass of this data that limits what responses generative AI can give. You can't make it smarter by throwing more hardware at it. That did work to improve the user experience for a while, but the gains were way less than the cost, and now, as the AI players double down on this lost bet ("OK, it's a hardware threshold, more GPUs!"), the global carbon emissions of data centres rise, causing brownouts on local grids, and AI continues to trip balls. Because the training data is running out! Remember, you can run an AI on your laptop, with the network turned off, and get just as "good" a result.
Again, doubling down, the "Tech Bros" are using AI-generated training data. Data science literally calls this "Habsburg AI," after the famously inbred European royal dynasty. It's data science's answer to inbreeding. I come from Tasmania, Australia's Newfoundland. We grow up learning jokes about inbreeding, because it has been an awful, awful thing in that state's history, and the isolated and isolating normative culture there also proves that information inbreeds, too. And the data scientists speaking out against training AI on AI aren't shying away from the stereotypes, because the results are bearing out the analogy.
I'm not a luddite. I built a kit microcontroller at 15, the "Baby 2650," running a Signetics 2650 chip. It cost more "back then dollars" than a Raspberry Pi 5 costs today, had 256 bytes of RAM, and I had to solder the parts onto the board. A "Pi" comes assembled. I had a succession of plastic computational toys in my early 20s: a TRS-80 "CoCo", a Micro B, and access to my college's BBC B+ and Mac labs, and I used these machines for many things. I taught myself basic coding on the B+ and learned 2D CAD in MacDraw. I settled on Apple, even before Mac OS got a command line interface. I've had a succession of improving tech ever since. I had a personal web page in 1995 and ran Bicycle Tasmania's first website for 4 of my 5 years actively on the committee. I established local forums for my interest groups. At work I pitched iPhone and, as it improved, Android as the only field recorder my journalist colleagues would ever need. I am as big a fan of computational tech as I am of the woodworking tools that help luthiers at Fender build the guitars I play.
So, when I say "AI doesn't work," I mean it doesn't work to a point beyond uselessness, beyond safety. It is a tool for the hyperwealthy to extract money from us, using our general, unquestioning fascination with new tech. Put your hand up if you have an AI subscription. Come on, own up.
Use an open source search engine to search these talking points, read the Reddits, etc. On YouTube, the best summary of what AI is useful for was a comment on a physicist's teardown of how "Meh" AI is. It reads, "CoPilot has made my coding very productive, I do my coding, it does my emails." The only replies comments that ring that true ever get are laugh emojis, or none. Because they tear open the truth, and the truth is Generative Artificial Intelligence is a "Money Heist." The rich are stealing from us, faster and more often than ever. This trend to AI is a tool of Accelerationism, the philosophy that espouses that those with money and power should hasten civilisation's collapse so they can build a better world. Ask yourself why OpenAI's Sam Altman is a self-confessed doomsday prepper.
So, where you can, stop using AI, especially for trivial stuff. Evolution gave you a brain. Use it! Don't be so lazy. Read books! Make things and learn stuff by using projects on the web - I don't care what: watch a woodworking video on the toobs, learn how to 3D print, smelt aluminium cans into useful or beautiful things. Do what sets us apart from earlier evolutionary branches, think, design, create and be smug! Turn off Apple Intelligence and CoPilot. Don't use GPT, Llama or the others. Use your brain to figure things out! I know so many people, people with more education than me, falling down the AI rabbit hole, taking the bollocks as truth. When I google the query they're telling me about and go to authoritative sites like Wikipedia, the answer they were given is wrong 50 percent of the time; the other 50 percent, they had to reframe their query at least 3 times for an answer they could have got in the first line of a pre AI google search.
AI is a hype machine, at best. Don't believe the hype.
no subject
Date: 2024-12-13 01:53 am (UTC)

The test case I use with AI is tin whisker mitigation in electronics. When you soldered your Baby 2650, you didn't have to worry about your solder joints (likely made with 63% tin, 37% lead) unexpectedly growing fuzzy. Modern electronics using modern lead-free solder alloys do suddenly grow whiskers of tin, which can cause short circuits. If you ask an AI how to prevent tin whisker formation, it can tell that there are old solder alloys and modern solder alloys; it will link "newer" with "better," and confidently but completely incorrectly instruct you to use the latest SAC405 tin-silver-copper alloy.
no subject
Date: 2024-12-13 03:55 am (UTC)

no subject

Date: 2024-12-13 04:02 am (UTC)

Thoughts

Date: 2024-12-27 02:38 am (UTC)

I'm lucky if I can get ordinary electronics to work, I don't need something fancier that's already prone to mistakes. The poor sod doesn't need to be confused by input from a couple dozen different universes.
I have seen a few applications where AI seems to be useful in saving time. I came across one example of it generating prompts. It's not something I need, but I could see it appealing to some people. If it's replicable.
But research? Answering questions? It's terrible at that. Like lawsuit territory bad.
>> received answers with a usefulness ranging from a "meh" to "tripping balls like Archer on a family trauma related bender." <<
ROTFLMAO!!!
>> I used Siri for about 3 weeks when it first rolled out <<
I can't abide listening tech. The privacy is so bad, I don't even want to be around other people using it.
>> "Fuck OFF, Siri!" it replied, "How rude!" I'm so glad my friend witnessed this, I have a witness! Later, when I opened my phone to turn Siri off, she'd "left," all Siri settings were off. One reason I still like Apple, coder's easter eggs.<<
Admittedly, that bit was brilliant.
>>Again, doubling down, the "Tech Bros" are using AI-generated training data. Data science literally calls this "Habsburg AI," after the famously inbred European royal dynasty. It's data science's answer to inbreeding. <<
Oh, that's brilliant. Must remember that description!
I found a great article about it here.
>> I'm not a luddite. <<
Meanwhile I'm living a bit south of an Amish community. While I don't draw the lines in the same places as they do, I do use their core premise: "Before adopting any new piece of technology, first determine whether it will do more harm than good. If so, do not adopt it." I like some technology, but the modern trend is moving rapidly away from what I can use at all, let alone what is actually useful.
>> "CoPilot has made my coding very productive, I do my coding, it does my emails." <<
I wouldn't let it answer unsupervised. There are already cautionary tales about that. But it could be a great timesaver for answering a flood of email you don't care about. 1) Tell it to write 20 replies to a sample message you get a lot of. 2) Pick the best, or combine them into a good reply. 3) Use an auto-reply to send that message to emails matching its category. This is likely faster than writing it from scratch, at least for people with average or lower writing speed.
Which seems to be the leading application: getting AI to do stuff that humans don't really want to do.
>> The rich are stealing from us, faster and more often than ever.<<
Sadly so.
>> This trend to AI is a tool of Accelerationism, the philosophy that espouses that those with money and power should hasten civilisation's collapse so they can build a better world. Ask yourself why OpenAI's Sam Altman is a self-confessed doomsday prepper.<<
I'm more concerned when I see weathermen and scientists doing it. But well, disaster preparedness is no longer lunatic fringe, in a world where the fire season is pushing 6 months and hurricanes would be Category 7 if people were honest about it. You'd better be able to fend for yourself when the weather pitches a wobbly, because chances are nobody's coming to help.
>>Do what sets us apart from earlier evolutionary branches, think, design, create and be smug!<<
Well reasoned.
>>the other 50 percent, they had to reframe their query at least 3 times for an answer they could have got in the first line of a pre AI google search.<<
Just doing a google search often takes a lot of reframing, and it's a lot worse now that they've reduced or eliminated the codes that used to make it feasible to eliminate irrelevant answers. That will ultimately be fatal for search engines if they keep it up, because the internet is the world's biggest slushpile and is only usable if you have effective ways to find what you want amidst the mess.
Looking at AI art, I've noticed that getting coherent results requires learning how to frame a paragraph or more of rather arcane phrases. That's a skill unto itself, and it's one that not everyone is good at, but some folks get worthwhile results from.
>>AI is a hype machine, at best. Don't believe the hype.<<
It's not something I have great interest in pursuing personally. It's interesting to watch, though -- I've been observing AI in fiction and on the internet for many years, watching it develop.
... kinda like watching the old videos of flying machines crash and catch on fire.
Re: Thoughts
Date: 2024-12-27 03:54 am (UTC)

1. how to reduce/eliminate harm inherent in a new tech, the primary harm being the ongoing digital surveillance by entities not ethically (at least) entitled in any way to surveil, even by contract provision.
2. how to distribute the index part of search, so that everybody's search contributes to a strong and accessible knowledge tree (like classic google when they won the internet in the 90s), but keep each actual search private and device local only.
3. how to create strong firewalls between a single user's "device and search interface" and the "peer network master knowledge tree" made up of all users' devices.
There are technologies for this but, because many originate in "web 3" crypto ecosystems, the only one I can think of that fits the scope of this imaginarium is gun.js, which is JavaScript only at the moment. JavaScript breaks my brain every time I attempt to learn it. That aside, the idea of a "graph database" makes peer-to-peer search possible, and it's "local-first," which is higher privacy than any client/server search could muster. Also, while the public knowledge tree would be slower to update than a central server, it makes search local first, making most searches secure by making personal data inaccessible to the outside world.
Sadly, I am no coder, and have no resources to pay coders at the moment. So I put the ideas into the public domain. Pay ideas forward, right?
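To make the shape of the idea concrete, here's a minimal toy sketch in plain JavaScript. To be clear, this is not gun.js and not anyone's real API; every name in it is invented for illustration. It just shows the three-part split described above: each device builds its own term-to-location index, a peer may see only the exported knowledge tree (never the queries), and merging is a set union, so peers converge no matter what order syncs happen in.

```javascript
// Hypothetical sketch of a local-first, peer-shared search index.
// Not gun.js; all class and method names here are made up.

class LocalIndex {
  constructor() {
    this.terms = new Map(); // term -> Set of locations (URLs etc.)
  }

  // Indexing happens on-device; nothing about *who* indexed is stored.
  add(term, location) {
    if (!this.terms.has(term)) this.terms.set(term, new Set());
    this.terms.get(term).add(location);
  }

  // Queries never leave the device.
  search(term) {
    return [...(this.terms.get(term) ?? [])];
  }

  // The only thing a peer is allowed to see: the knowledge tree itself,
  // never queries or other personal data.
  export() {
    return [...this.terms].map(([term, locs]) => [term, [...locs]]);
  }

  // Merge a peer's exported tree into our own. Set union is
  // order-independent, so any sync order converges to the same tree.
  merge(exported) {
    for (const [term, locs] of exported) {
      for (const loc of locs) this.add(term, loc);
    }
  }
}

// Two "devices" index independently, then one syncs from the other.
const deviceA = new LocalIndex();
const deviceB = new LocalIndex();
deviceA.add("solder", "https://example.org/whiskers");
deviceB.add("solder", "https://example.org/alloys");
deviceB.merge(deviceA.export());
console.log(deviceB.search("solder").sort()); // both locations, queries stayed local
```

The privacy property falls out of the split: `export()` carries only term-to-location pairs, so even a hostile peer learns what the network knows, not what any user asked.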
And apologies, I do tend to run off at the mouth about tech. You won't be tested on any of this ;-)
Re: Thoughts
Date: 2024-12-27 08:51 am (UTC)

Excellent idea.
>> 1. how to reduce/eliminate harm inherent in a new tech, the primary harm being the ongoing digital surveillance by entities not ethically (at least) entitled in any way to surveil, even by contract provision.<<
Yeah, America is a post-privacy society. They are going to find out why privacy was the first thing humanity had to invent in order to go from small family groups to larger clans.
>>2. how to distribute the index part of search, so that everybody's search contributes to a strong and accessible knowledge tree (like classic google when they won the internet in the 90s), but keep each actual search private and device local only.<<
Challenging, but should be doable if you focus on saving the information about where things are, not who asked about them.
>> 3. how to create strong firewalls between a single user's "device and search interface" and the "peer network master knowledge tree" made up of all users' devices. <<
And avoid problems like passing malware around.
>>Also, while the public knowledge tree would be slower to update than a central server, it makes search local first, making most searches secure by making personal data inaccessible to the outside world.<<
That might also help turn up replies that are relevant. Trying to find a local source to deliver firewood is a fucking pain in the ass.
>> Sadly, I am no coder, and have no resources to pay coders at the moment. So I put the ideas into the public domain. Pay ideas forward, right? <<
Same here. I've described any number of things that I know how they work, but not how to build one.
>> And apologies, I do tend to run off at the mouth about tech. You won't be tested on any of this ;-) <<
Don't worry about it. You'll get to see me going on about xenolinguistics or permaculture or any number of things. I can follow some tech conversations, I'm just limited by the fact that electronic or mechanical things tend to go haywire around me.