Content about AI

Rare bees kill Meta’s nuclear-powered AI data center plans (popsci.com)

Meta and many other tech companies continue to face energy crunches thanks to their recent AI investments. Earlier this year, Microsoft confirmed its greenhouse gas emissions rose an estimated 29 percent since 2020 due to new data centers specifically “designed and optimized to support AI workloads.” Google has also calculated that its own emissions have increased as much as 48 percent since 2019, largely because of data center energy needs.

The energy demands of AI, and the resulting environmental impact of the technology, are truly staggering. And now environmental laws, in the form of a yet-to-be-identified protected species of bee, seemingly stand in the way of efforts to tap into nuclear power to reduce those impacts. This is quite the conundrum.

Maybe AI will provide a solution.

tags: tech ai

posted by matt in Monday, November 4, 2024

NVIDIA is 'undervalued' amid artificial super intelligence race: Softbank's Son (yahoo.com)

Artificial super intelligence will be 10,000 times smarter than a human brain and will exist by 2035

10,000x in 10 years. The AI hype train may have a new captain.

tags: tech ai

posted by matt in Tuesday, October 29, 2024

AI Slop Is Flooding Medium (wired.com)

Stubblebine’s argument—that it doesn’t necessarily matter whether a platform contains a large amount of garbage, as long as it successfully amplifies good writing and limits the reach of said garbage—is perhaps more pragmatic than any attempt to wholly banish AI slop. His moderation strategy may very well be the most savvy approach.

It also suggests a future in which the Dead Internet theory comes to fruition. The theory, once the domain of extremely online conspiratorial thinkers, argues that the vast majority of the internet is devoid of real people and human-created posts, instead clogged with AI-generated slop and bots. As generative AI tools grow more commonplace, platforms that give up on trying to blot out bots will incubate an online world in which work created by humans becomes increasingly harder to find on platforms swamped by AI.

Medium is taking an interesting approach to handling “AI slop” as it’s called. The platform isn’t banning it per se, but striving to reward it with zero views. I think I see a lot of it over there, so I think some of the slop is getting some views. I generally like Medium.

tags: tech ai

posted by matt in Monday, October 28, 2024

Perplexity blasts media as ‘adversarial’ in response to copyright lawsuit (theverge.com)

Perplexity, in its response today, argues that news organizations like News Corp that have filed lawsuits against AI companies “prefer to live in a world where publicly reported facts are owned by corporations, and no one can do anything with those publicly reported facts without paying a toll.”

The distinction between facts themselves and facts as reported is the key here. Media companies don’t own facts. They do, however, own the intellectual property rights that protect their reporting of those facts. The question the law needs to answer is whether AI training on their reporting of the facts violates those rights or not. It’s not clear cut, and courts will wrestle with this question for years.

And, yes, it’s a lawsuit…which is adversarial by nature.

tags: law copyright ai

posted by matt in Thursday, October 24, 2024

Penguin Random House underscores copyright protection in AI rebuff (thebookseller.com)

This is great to see, but I’m not sure it does much to prevent AI companies from training on newly published books. I can’t imagine they’re scanning printed books or buying ebooks to plug into model training. Maybe they’re scraping copies available on rogue sites, which would already be a copyright infringement.

It certainly feels good, though.

The new wording states: “No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems”, and will be included in all new titles and any backlist titles that are reprinted.

tags: ai copyright law

posted by matt in Saturday, October 19, 2024

The Data That Powers A.I. Is Disappearing Fast (nytimes.com)

Those restrictions are set up through the Robots Exclusion Protocol, a decades-old method for website owners to prevent automated bots from crawling their pages using a file called robots.txt.

Robots.txt for the win.
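For reference, opting out of today's known AI-training crawlers takes only a few stanzas. GPTBot (OpenAI), Google-Extended (Google's AI-training token), and CCBot (Common Crawl) are user-agent names those operators have published, though it's worth verifying the current names before relying on them:

```
# robots.txt — disallow known AI-training crawlers site-wide
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Everything else (e.g., search indexing bots) remains allowed
User-agent: *
Allow: /
```

The obvious catch: robots.txt is voluntary, so this only works against crawlers that choose to honor it.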

"Major tech companies already have all of the data," she said. "Changing the license on the data doesn't retroactively revoke that permission, and the primary impact is on later-arriving actors, who are typically either smaller start-ups or researchers."

"...that permission."

AI and tech companies in general have been gaslighting everyone for years now, skipping right past the question of whether the use of publicly available information for training is copyright infringement or not. This is not a settled question, legally, and their continued efforts to portray it as settled are almost certainly intentional and orchestrated.

Mr. Longpre said that one of the big takeaways from the study is that we need new tools to give website owners more precise ways to control the use of their data. Some sites might object to A.I. giants using their data to train chatbots for a profit, but might be willing to let a nonprofit or educational institution use the same data, he said. Right now, there's no good way for them to distinguish between those uses, or block one while allowing the other.

Yes, yes, and yes. Let's add more granular control to the Exclusion protocol, somewhere between specific bots (which currently exists) and specific content (which also exists). Something like the ability to exclude bots crawling for a certain purpose (training an AI model v. updating a search index), or bots owned or operated by a certain type of entity (commercial entity v. non-profit, or even big tech v. small shop). Implementing any of these on a technical level would require bot operators to accurately disclose information about their bot, purpose, and entity. Seems like the province of Congress and a bit of a mountain to climb. But, figuring all of this out would certainly empower content creators.
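To make the idea concrete, here's a purely hypothetical sketch of what purpose- and entity-based directives might look like. Nothing like this exists in the Robots Exclusion Protocol today; every directive name below is invented for illustration:

```
# HYPOTHETICAL robots.txt extension — not part of any standard
# Exclude crawling for a declared purpose, regardless of bot identity
User-agent: *
Disallow-purpose: ai-training
Allow-purpose: search-indexing

# Exclude by declared operator type
Disallow-entity: commercial
Allow-entity: nonprofit
Allow-entity: educational
```

As noted above, any scheme like this only works if bot operators truthfully declare their purpose and entity type, which is exactly why it feels like a job for legislation rather than a purely technical fix.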

tags: tech web ai

posted by matt in Saturday, July 20, 2024

McDonald’s pauses AI-powered drive-thru voice orders (engadget.com)

IBM has given us confidence that a voice ordering solution for drive-thru will be part of our restaurant’s future.

To be clear, McDonald's isn't walking away from "voice ordering" technology altogether. It looks like they're walking away from IBM with enough confidence to make a final decision on fully implementing it by the end of the year. Does someone have a better solution than the IBM system? Is McDonald's going to roll its own?

Who knows, but it seems like AI will be handling fast food ordering in the not too distant future. This could actually be a great "real people" test for AI – we all know what it's like to have a drive-through order get messed up, for whatever reason. It's frustrating, and having the restaurant fix the mistake is usually not worth the time and effort of driving back. Will AI improve order accuracy? If it does, "real people" will notice and AI will earn a victory in the public eye. Real people will also notice if it doesn't, or if it makes things worse. Either of those situations would be a major loss for AI.

Do you want fries with that?

tags: tech ai

posted by matt in Monday, June 17, 2024

Apple Vision Pro sales will start on February 2 (appleinsider.com)

The idea of spatial computing is intriguing, but I’m not interested until using it requires no more than a pair of eyeglasses, which I’ve worn every day for decades now. I’m glad some will adopt the early ski mask versions, though. We need them to get to the eyeglasses stage of spatial computing.

I’m happy to wait.

tags: tech ai apple spatialcomputing

posted by matt in Monday, January 8, 2024

WWDC: Two of AI’s biggest PR issues play to Apple’s strengths - 9to5Mac (9to5mac.com)

Unlike its competitors, Apple has strengths in two areas that are currently PR headaches for AI: privacy and the environment.

Another strength of Apple: coming from behind in a race to eventually lead the pack. They've done this in category after category over the years: laptops, MP3 players, mobile phones. They might be in the process of doing it again with virtual reality / augmented reality / spatial computing.

Will they do the same come-from-behind-to-eventually-lead with AI? Possibly.

I do believe they're going to define a uniquely Apple lane in the AI race, and I'm happy about that. Privacy is everything, and Apple is the company I trust the most in that regard.

Monday should be interesting.

tags: apple tech ai

posted by matt in Thursday, June 6, 2024

The Raspberry Pi 5 gets an AI upgrade (zdnet.com)

When connected to a Raspberry Pi 5 board running the latest Raspberry Pi OS, the NPU is automatically available for AI computing tasks. The AI module also has direct access to the Raspberry Pi's camera software stack and works with both first-party and third-party cameras. The NPU allows the Raspberry Pi 5 to perform AI tasks such as object and facial recognition, human pose analysis, and more. Using an NPU frees up the Raspberry Pi 5's CPU, allowing it to focus on other tasks, making your projects more efficient and powerful.

The People's AI needs hardware and the Raspberry Pi just jumped in with both feet.

tags: tech ai

posted by matt in Wednesday, June 5, 2024

Intel CEO Takes Aim at Nvidia in Fight for AI Chip Dominance (yahoo.com)

“Unlike what Jensen would have you believe, Moore’s Law is alive and well,” he said, stressing that Intel will have a major role to play in the proliferation of AI as the leading provider of PC chips.

I found myself thinking of the death cart scene in Monty Python and the Holy Grail as I read this. The frail old man in the scene pleads "I'm not dead" in hopes of avoiding being flung onto the pile of corpses on the cart. His relative bribes the worker, who then clubs the old man to death. Ultimately, the relative slumps the dead man onto the pile and life goes on.

It feels like Intel is slung over the shoulder of the chip industry right now, claiming it's not dead. Hopefully it avoids the club and the old man finally gets to take that walk he mentioned.

tags: tech ai montypython

posted by matt in Tuesday, June 4, 2024

Adobe scolded for selling ‘Ansel Adams-style’ images generated by AI (theverge.com)

“We don’t have a problem with anyone taking inspiration from Ansel’s photography,” said the Adams estate. “But we strenuously object to the unauthorized use of his name to sell products of any kind, including digital products, and this includes AI-generated output — regardless of whether his name has been used on the input side, or whether a given model has been trained on his work.”

Don't forget, there's a trademark side to the sale of AI-generated images. The estate of Ansel Adams hasn't forgotten this, and they recently reminded Adobe about it, too. Adobe's "we have systems..." response sounds just like every other tech company on such issues. I'm sure they'll tout the "challenges of scale" at some point.

tags: ai law trademark photography

posted by matt in Monday, June 3, 2024

Scarlett Johansson was 'shocked' by OpenAI 'Sky' voice (mashable.com)

Prior to Johansson's statement, OpenAI made this claim in a post Sunday: "We believe that AI voices should not deliberately mimic a celebrity's distinctive voice—Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice."

They left out the part where they intentionally selected an actress whose voice sounds just like Johansson's.

tags: ai

posted by matt in Monday, May 20, 2024

iOS 18 to Use AI to Summarize Notifications, Add to Calendar, and More (macrumors.com)

He also noted that a ChatGPT-like chatbot will be noticeably absent from Apple's upcoming AI features. Apple executives are said to have admitted that they're "playing catch-up" internally.

Sounds like we'll get some baby steps with AI in iOS 18. I expect Apple to turn its slower pace on AI into a positive at the upcoming WWDC, touting guardrails and privacy. I think that's a winning strategy, btw. Non-techie, "regular people" are intrigued by ChatGPT right now, but they're also concerned and a little afraid. I think a turtle that provides comfort and allays fears wins this race, and I think that's exactly what Apple intends to do.

tags: ai tech apple

posted by matt in Sunday, May 19, 2024

It’s the End of Google Search As We Know It (wired.com)

"The paradigm of search for the last 20 years has been that the search engine pulls a lot of information and gives you the links. Now the search engine does all the searches for you and summarizes the results and gives you a formative opinion."

AI is about to take us from the search results era to the search opinion era.

tags: ai google

posted by matt in Tuesday, May 14, 2024

OpenAI launches GPT-4o in time for rumored iOS 18 Apple deal - 9to5Mac (9to5mac.com)

WWDC is going to be quite interesting this year, I think. Apple has been touting its focus on privacy lately, so an "on device" aspect to its AI feels natural. But it's also working with OpenAI, so a connected aspect must be in play, too. Will users have options? In a global setting...at each interaction?

tags: tech ai apple

posted by matt in Monday, May 13, 2024

OpenAI and Stack Overflow partner to bring more technical knowledge into ChatGPT (theverge.com)

OpenAI will have access to Stack Overflow’s API and will receive feedback from the developer community to improve the performance of AI models. OpenAI, in turn, will give Stack Overflow attribution — aka link to its contents — in ChatGPT.

Interesting. Stack Overflow is the single best repository for coding knowledge on the web. Hard to believe the company is only receiving attribution for giving OpenAI access to its mountain of content.

tags: ai

posted by matt in Monday, May 6, 2024

Evaluation of ChatGPT’s responses to information needs and information seeking of dementia patients - Scientific Reports (nature.com)

[L]arge language models such as ChatGPT can serve as valuable sources of information for informal caregivers, although they may not fully meet the needs of formal caregivers who seek specialized (clinical) answers. Nevertheless, even in its current state, ChatGPT was able to provide responses to some of the clinical questions related to dementia that were asked.

I suspect this is true for medical information on the web, too — good for most of us, incomplete for medical folks.

tags: science ai chatgpt

posted by matt in Saturday, May 4, 2024

Apple's Tim Cook teases AI ambitions in latest earnings call (appleinsider.com)

"We believe in the transformative power and promise of AI, and we believe we have advantages that will differentiate us in this new era, including Apple's unique combination of seamless hardware, software, and services integration, groundbreaking Apple Silicon with our industry-leading neural engines, and our unwavering focus on privacy, which underpins everything we create."

Apple is arriving late to the generative AI party and is bringing a different approach with it. Sound familiar? This is how the company came to dominate portable music players, smartphones, earbuds, tablets, laptops, etc. The strategy doesn't always work (think cars), but I'm betting it will in AI.

tags: tech ai apple

posted by matt in Thursday, May 2, 2024

Scripting News: Sunday, September 10, 2023 (scripting.com)

"And it tried to stop us in no man's land. Insane. Anyone seeing this behavior would have thought I was drunk. So next time something like that happens if I'm in FSD mode, I'm going to cancel it by turning the steering wheel slightly and taking over fully."

I won't say this happens regularly, but it certainly has happened to me several times over nearly five years of Tesla ownership. You always have to be ready to take over control of the car in FSD in an instant. It is nowhere near being a license to ignore the road, traffic, and surroundings. I hope to see that someday, but we're not there just yet.

tags: tesla cars ai

posted by matt in Sunday, September 10, 2023

Google will soon require disclaimers for AI-generated political ads (theverge.com)

"Starting in November, Google says advertisers must include a disclosure when an election ad features 'synthetic content' that depicts 'realistic-looking people or events.'"

Fine print will never be the same.

tags: ai

posted by matt in Wednesday, September 6, 2023

US Supreme Court rejects computer scientist's lawsuit over AI-generated inventions (yahoo.com)

Thaler told the Supreme Court that AI is being used to innovate in fields ranging from medicine to energy, and that rejecting AI-generated patents "curtails our patent system's ability - and thwarts Congress's intent - to optimally stimulate innovation and technological progress."

Congress can change this at any time. I bet we'll have Congressional hearings on AI inventorship over the next few years, which will be fascinating. For me, at least.

tags: law patents ai supremecourt congress

posted by matt in Monday, April 24, 2023

Michael Schumacher's family taking legal action over A.I. interview (cnbc.com)

"A strapline added: 'it sounded deceptively real'. Inside, it emerged the quotes had been produced by AI."

This is interesting. Sounds like the magazine didn't hide the fact that the interview was AI generated (although they likely deemphasized it). The magazine essentially used Schumacher's name and his likeness without his permission. I suspect they'll regret the "deceptively real" descriptor.

tags: ai law

posted by matt in Thursday, April 20, 2023

OpenAI's ChatGPT Blocked In Italy: Privacy Watchdog (barrons.com)

"...no legal basis to justify the mass collection and storage of personal data for the purpose of 'training' the algorithms underlying the operation of the platform."

Italy is taking an interesting angle on guardrailing AI: privacy.

tags: tech ai

posted by matt in Friday, March 31, 2023

Midjourney ends free trials of its AI image generator due to 'extraordinary' abuse | Engadget (engadget.com)

"Midjourney is putting an end to free use of its AI image generator after people created high-profile deepfakes using the tool."

And they're surprised by this? Really? Give me a break.

tags: tech ai

posted by matt in Friday, March 31, 2023

I've played around with ChatGPT a bit over the last few days. I'm wondering how it might be useful for Daystream. I can't get past the basic inaccuracies in the system, though.

For example, today I asked it to "Describe some events or happenings that occurred on March 29, 1983." I thought API calls using a prompt like that might be an interesting way to generate user-independent content for historical days. I'm not sure why I picked 1983, but 40 years ago today seemed like a decent test. I was 13.

ChatGPT quickly gave me a list of seven events that, according to the AI, "occurred on March 29, 1983." The first event on the list caught my eye because I have specific memories of it: "The final episode of the television series 'M*A*S*H' aired on CBS, drawing a record-breaking 125 million viewers in the United States."

Unfortunately, ChatGPT got this one wrong. Basic web research using Wikipedia and IMDb reveals that the final episode of M*A*S*H, "Goodbye, Farewell and Amen," actually aired on February 28, 1983, 40 years and one month ago.

This is an easy one to get right, too. The last episode of M*A*S*H is generally considered to be one of the most-watched scheduled television episodes of all time. It's the GOAT of episodic television. If AI got details wrong on something that is so easy to verify, what might it get wrong on things that aren't as easy to check? Or that can't be checked? What about a historical event or item that has such a low level of general interest that people only touch its details once a generation, once every other generation, or less? Human verification of AI-generated historical content isn't just something that should be done, it's something that must be done to avoid a quiet rewriting of the details of human history. Anything that isn't human-verified should be labeled as such, and treated accordingly.

ChatGPT, and AI generation of content generally, still intrigues me and I think there might be a place for it in Daystream at some point. But, in light of errors like this that are revealed with basic fact-checking, I currently have no confidence in using it to assert that something actually happened on a particular day in the past, or that a list of various things occurred on a particular day.
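For the curious, here's a minimal sketch of the kind of API call described above, assuming the official openai Python package. The model name and prompt wording are illustrative, and per the point of this post, anything the call returns is unverified until a human checks it against primary sources:

```python
# Sketch: generating "on this day" candidate content for a given date.
# Every returned "event" must be treated as unverified — see above.

import os


def events_prompt(date_str: str) -> str:
    """Build the prompt used to request historical events for a date."""
    return (
        f"Describe some events or happenings that occurred on {date_str}. "
        "List each event on its own line."
    )


def fetch_candidate_events(date_str: str) -> str:
    """Call the chat completions endpoint (requires OPENAI_API_KEY set)."""
    from openai import OpenAI  # assumes the official openai package

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": events_prompt(date_str)}],
    )
    return resp.choices[0].message.content
```

Usage would be something like `fetch_candidate_events("March 29, 1983")`, with the result routed into a human review queue rather than published directly.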

tags: daystream tech ai

posted by matt in Wednesday, March 29, 2023

Wozniak, Musk & more call for 'out-of-control' AI development pause | AppleInsider (appleinsider.com)

"AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts," it continues. "These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt."

We need Asimov's Laws of Robotics for AI. Frankly, I'd feel better if that came in the form of an actual law, not an agreement among companies. An international treaty would be best.

tags: tech ai law policy

posted by matt in Wednesday, March 29, 2023

AI has cracked a key mathematical puzzle for understanding our world (technologyreview.com)

I don't fully understand the math involved, but I appreciate the significance of the advance made here. And the potential practical applications are interesting - the author sees a role for it in weather predictions...and climate change.

tags: math ai weather climatechange

posted by matt in Thursday, November 5, 2020