Six Colors


It’s time for a Vision Pro check-in. How are we using it, and how should Apple sell it? Also, some EU app marketplace developers just want to go home.


By Jason Snell

Kobo Libra Colour Review: Color, but at what cost?

Kobo Libra Colour

All my computing devices, save one, have color displays. The last time I regularly used a computer without a color display was probably in the mid-1990s. The only exception is my e-reader, which—since the very first Kindle I bought—has been a black-and-white E Ink screen that excelled at the boring job of displaying text. But… what if an e-reader added color?

We’ve reached the point where E Ink technology—which, unlike the conventional display technology in our phones and computers, allows low-power reflective displays that work more like actual ink on paper—can actually display color decently and affordably. And so I’ve spent the last few weeks with my first color e-reader, the $219 Kobo Libra Colour.

In theory, color adds a new dimension to the e-reader. Highlights can be color coded, and book covers finally appear in full color. This is especially fun when I turn off the reader and a boldly colored book cover, designed for maximum marketing appeal, appears on the device’s screen. Unfortunately, a moment later the device’s backlight turns off and the colors become muted unless the screen is in bright light.

I love e-readers, and for the last few years my e-reader of choice has been the Kobo Libra 2. It’s a small (7-inch diagonal) device that’s easy to hold, with physical page turn buttons. It’s a winner. And now, there’s one in color!

But the truth is, most of what I use an e-reader for is text on a page. Color isn’t really part of the equation. I spent some time reading a color comic book using the Libra Colour, and it worked—but it wasn’t fun. The screen is just too small to read comfortably, and the colors were muted, feeling more like I was reading on newsprint (or a very old comic book) than on a bright, modern iPad display.

You can read comics on the Kobo Libra Colour, but I wouldn’t recommend it.

And the ugly truth is that, as miraculous as it is that E Ink displays can do color, the Libra Colour’s screen is actually inferior to the screen on the Libra 2. Up close, it’s clear that there’s some sort of visible background texture on the Libra Colour (sort of a yellowish-gray wash) that reduces contrast. And when I cranked the brightness up to 100% to read in bright sunlight, the Libra 2 was noticeably brighter and clearer than the Libra Colour.

It’s hard to see, but the Kobo Libra 2 screen (left) is brighter and offers higher contrast, while the screen of the Kobo Libra Colour (right) has a patterned background that reduces contrast.

Physically, the Libra Colour is almost identical to the Libra 2. It’s a little thicker at the grip edge, has a different plastic texture on the back of the case (which I found more pleasant), and is a few grams lighter than the previous model. Unfortunately, it’s still got a recessed screen, meaning dust and hair can collect around the edges of the bezel. That’s a negative, but it does make it easy to find the edge of the display and slide your finger up and down to adjust brightness without fiddling with a more complex user interface. I’d still rather have a flush screen, though.

In terms of software, Kobo has seen fit to enable Dropbox support on the Libra Colour—it was previously only available on higher-end Kobos, not the Libra—and added support for Google Drive as well. This means it’s a lot easier to sideload books, comics, and random PDFs from your collection without having to attach the Kobo via USB-C. In practice, though, I found myself still using the Calibre app to sideload files to my Kobo unless I was really in a pinch, because Kobo’s own Dropbox import doesn’t “dress up” ePub files in any way, while Calibre has some nice plug-ins that convert generic ePubs to use some Kobo-specific extensions that improve the presentation of the books.

Color book covers are fine (when they’re in bright light), but is that enough? (Pictured: Sarah’s Kobo Clara Colour.)

The Libra Colour is not a bad e-reader, but it feels like a misstep by Kobo. Color isn’t really necessary for reading text, and the color display has a warmer color temperature and worse contrast. All for a $40 higher list price—though at least cloud syncing isn’t being withheld from the Libra line anymore. I wouldn’t mind the move so much—I’m sure some people want to view color comics and PDFs and would be willing to put up with the small screen, and users of the optional Kobo Stylus 2 might enjoy having different ink colors for their markup—if Kobo kept a non-color model around at a lower price. But as I write this, the Libra 2 is not available from Kobo.

If you’re a casual reader of eBooks and are barely motivated to buy a dedicated e-reader at all, the $120 base-level Kindle ($20 less if you let Amazon stick ads on it) is probably good enough, though it doesn’t have a flush screen, isn’t waterproof, and has no page-turn buttons, which I consider essential for pleasurable reading. The $130 Kobo Clara BW is similar, and has similar drawbacks. (My friend Sarah Hendrica Bickerton upgraded to the $150 Kobo Clara Colour, which was released at the same time—and she had the same issues with the screen being dimmer and off-color that I saw.) The Kindle Paperwhite also doesn’t have buttons, but it’s got a flush screen and is slightly cheaper than the Libra—$150 with ads, $170 without.

(Reader, I am filled with despair at the current state of e-reader options. More on this soon.)

In the end: I don’t mind the Kobo Libra Colour. It definitely fills a niche. But adding $40 to the price and degrading the screen quality a bit, all in the name of nice-but-not-necessary color, is really frustrating. The Libra 2 was my go-to recommendation for the discerning eBook reader; the Libra Colour might still take that crown, but it’s an all-around worse value than its predecessor, and that’s really a shame.


By John Moltz

This Week in Apple: Carl’s Jr.+

John Moltz and his conspiracy board. Art by Shafer Brown.

Apple pushes back against regulators as photos come back from the dead. Lastly, I should apparently not write about streaming service bundles around lunchtime.

Not going gentle into that good night

Apple is hoping that the path of legal recourse is a two-way street.

“Apple questions validity of DOJ antitrust lawsuit in bid to dismiss case”

“Your honor, what even is a ‘lawsuit’? Dresswear for bills that have passed Congress? I ask you, have you ever heard of anything so ridiculous? Why, even the Bill in the Schoolhouse Rock video upon which our entire system of governance is founded was naked but for a ribbon and a button.”

The company is also pushing back against the EU.

“Apple fights $2B antitrust fine over Spotify complaint, challenging EU in court”

Apple was happy to pony up for previous fines, which were in the paltry hundreds of millions, but a billion here and a billion there and pretty soon Tim Cook is walking down to Apple Legal asking what the hell is going on.…

This is a post limited to Six Colors members.



Apple explains resurrected photos bug

Speaking to Chance Miller at 9to5Mac, Apple explained the issue (patched in the recent iOS 17.5.1 update) that caused old photos to spontaneously appear in some users’ libraries:

Apple confirmed to me that iCloud Photos is not to be blamed for this. Instead, it all boils to the corrupt database entry that existed on the device’s file system itself.

According to Apple, the photos that did not fully delete from a user’s device were not synced to iCloud Photos. Those files were only on the device itself. However, the files could have persisted from one device to another when restoring from a backup, performing a device-to-device transfer, or when restoring from an iCloud Backup but not using iCloud Photos.

In the release notes for 17.5.1, Apple had attributed the resurrected photos to “database corruption.” This didn’t seem unreasonable—databases can be finicky and corruption is not uncommon—but it also didn’t go far enough in explaining the issue for those who were affected. The further clarification to Miller does hold water, though, especially in terms of how photos moved from device to device. (The company also said that the single, much-reported story about a photo cropping up on a device that had been erased and given to someone else was false, which also makes sense: as John Gruber pointed out, it didn’t pass the sniff test.)

Attention has understandably focused on sensitive photos that have shown up in libraries, though I think that’s more of a human bias: what kind of picture are you going to notice popping up? (More to the point: which pictures are people likely deleting the most?)1

While it’s good that Apple has now (after several days of requests) clarified the issue, this does speak to a larger point: why is the company not more proactive in talking about these issues when they come up? For example, there still doesn’t seem to be any acknowledgement of the issue that locked users out of their Apple IDs/iCloud accounts last month. Some of this is undoubtedly an issue of scale—even a problem that seems widespread might only account for a tiny fraction of Apple’s overall user base. But when issues *do* hit the broader press, it still seems like the company only responds some of the time—and though the complaints may subside when the immediate issue is resolved (even if silently), it does all contribute to the feeling that Apple’s devices and services are a bit of a black box.


  1. Personally, the pictures I delete the most are screenshots I take for writing pieces. Would I notice if one from several years ago suddenly popped up in my library? Probably not. 
—Linked by Dan Moren

by Jason Snell

Yes, Secure Erase on Mac is secure

There’s been a lot of buzz lately involving a bug in Photos that caused some deleted items to reappear in libraries, including some (apparent) misinformation that blew the entire story out of proportion. Is Apple’s “Secure Erase” feature truly insecure?

No, it’s not. Howard Oakley has the technical details about why:

As far as I can tell, claims made about securely erased devices recovering old images originate from a single post on Reddit, since deleted by the person who posted it. Although that brought a series of cogent responses pointing out how that isn’t possible, it was picked up and amplified elsewhere, under the title iOS 17.5 Bug May Also Resurface Deleted Photos on Wiped, Sold Devices, which is manifestly incorrect. Sadly, even those who should know better have piled in and reported that single, retracted claim as established fact.

Good information and a useful lesson about not believing everything you read on the Internet.

—Linked by Jason Snell

Do we multitask on our iPads? What AI uses do we actually find helpful? Plus, the email apps we use and whether we’ve experimented with cloning our voice.



By Joe Rosensteel

The Dos and Don’ts of AI at WWDC

Photo illustration of Craig Federighi introducing AI features at next month's WWDC
Picture this. (Photo illustration by Joe Rosensteel)

For the past year, pressure has really been ramping up on the tech industry to do stuff with AI. What “AI” actually means can vary, but usually refers to a large language model (LLM) chatbot that takes natural language input.

While Apple has tried very hard to remind everyone that all the stuff they do with machine learning and the neural engine counts too, the fact is that Apple is perceived as being behind, because it has no LLM chatbot of its own.

LLMs are artificial, but not really intelligent. They can be quite wrong, or simply malfunction. But they are far better at conversational threads and at understanding context than original-recipe voice assistants like Siri. That was enough to spook Apple’s executives and lead them to begin a crash AI program.

OpenAI, Microsoft, Meta, Google—you name it. It’s a land grab. Everyone is trying to find a way around smartphone platforms, search monopolies, data brokers, ad sales, SEO, publishers, photographers, stock footage… pretty much everything. The urgency, the sheer sweatiness of tech companies to show their AI relevance, is palpable.

Apple doesn’t want anyone to see them sweat, but at WWDC they’re going to have to break out the AI buzzwords and show where they fit into the current zeitgeist. Here’s what Apple can learn from the mistakes other companies are making when it comes to demonstrating AI prowess.

Summaries and slop

Don’t show off summarizing a conversation. I know Mark Gurman suggests this might be a new feature, but every demo of it from other companies has gone over like a lead balloon. Summarization demos say one thing: “How can I more efficiently ignore the nuance and humanity of the people around me?” Also, demos of summaries are just plain boring.

Google I/O featured several instances of summarization that were not useful and borderline disrespectful. One was a theoretical conversation between a power couple and their prospective roofer: Google’s “helpful” summary said a quote was agreed to, but didn’t say what the quote actually was! The price itself (seems like a key element of a quote) didn’t appear until a follow-up question. The summary also omitted all the nuance of the roofer’s interactions with the husband in the scenario. Who would trust it?

LLM summaries remove words, collapse context, kill tone, and neuter meaning. Busy technology executives eat these demos up, though!

Don’t demo things that snoop on a user’s calls or their device’s screen. At Google I/O, a demo displayed a fraud warning during a phone call. That means there was an AI model listening to the phone conversation. Even if that’s happening entirely on your device, it’s still unnerving that Google is now listening to the contents of my phone calls. The same goes for Microsoft’s Recall, another on-device feature that watches everything you do—so long as you forget Microsoft’s lousy track record securing people’s devices.

Under no circumstances should there be a chatbot jumping into conversations between real people to offer help coordinating times or to issue reminders. Fortunately, Apple doesn’t ship a workplace chat platform, so we’re unlikely to get Google’s demo of “Chip,” the nosy virtual chat kibitzer shown at Google I/O. But I don’t want that bot in my iMessage threads, either.

No generative slop. Don’t show off AI-written poetry or book reports. If people ask for help writing a cover letter, show them an example of a cover letter. AI should point users to vetted and approved templates. (But there should be an AI story with Xcode at WWDC, or why even have it be about AI? It just needs to be respectful of developers’ needs, and actually useful in helping developers with their jobs.)

I think Apple already learned a valuable lesson about visual metaphor when they smashed instruments of human expression into a thin iPad, but just to reiterate: Don’t do that again.

Speaking of creation: Don’t show off images generated out of nothing but a prompt. Any generative elements should be augmented from source images or video. Show off altering aspect ratio on an image, object and lens flare removal, creating thumbnail images, sharpening, denoising, or focus effects.

Even then, keep it grounded to what a reasonable person would want to do with their photos. The Photos app doesn’t need to become Midjourney, or Stable Diffusion, and it certainly doesn’t need to use any models with opaque, legally questionable sources to augment a photo of you smiling at the beach. It should still be that photo at the end of the day.

As for partner demos, I would recommend against demonstrations from companies that have AI models that allow people to make a logo or icon for their company or product without using an artist. Under no circumstances should Midjourney, Dall-E, or any of the other generators that scraped art and photos off the internet be used as a demo. That sends the wrong message, even if it is absolutely a use case that can be demoed to show how the neural engine makes creating a logo 90% faster than on Intel.

Don’t demo video generators. These mostly scare people and impress weirdos. “Look, her hands are boiling!” They’re basically a substitute for artifacting stock footage, and Apple is not a purveyor of artifacting stock footage.

AI video tools that handle retiming, color grading, detail recovery, and noise reduction are all acceptable, especially if they can lean on Apple’s multifaceted imaging pipeline, or can use Apple’s depth data as part of the dataset in processing the footage.

For example: Apple is interested in customers shooting Spatial Video, but there are technical shortcomings with the different lenses. Show us how data can be transferred from one eye to the other to help reduce artifacts, and increase resolution. Do an easy-to-use version of something akin to Ocula.

It is possible to preserve AI/ML as a tool without letting AI/ML take over the output. There should always be a kernel of reality in every demo to ground it. It should apply to real life, not try to compete in the crowded hallucination market.

Hey, Siri

Now that the lede is good and buried, let’s talk about Siri.

We’d all love a senior Apple exec to get on stage and issue a mea culpa before launching the new version, but it’s probably going to be something more like, “Millions of people use Siri every day, which is why we’re excited to announce Siri is even better than before.”

Unfortunately, Mark Gurman has kind of burst the bubble:

The big missing item here is a chatbot. Apple’s generative AI technology isn’t advanced enough for the company to release its own equivalent of ChatGPT or Gemini. Moreover, some of its top executives are allergic to the idea of Apple going in that direction. Chatbot mishaps have brought controversy to companies like Google, and they could hurt Apple’s reputation.

But the company knows consumers will demand such a feature, and so it’s teaming up with OpenAI to add the startup’s technology to iOS 18, the next version of the iPhone’s software. The companies are preparing a major announcement of their partnership at WWDC, with Sam Altman-led OpenAI now racing to ensure it has the capacity to support the influx of users later this year.

Baffling. I have no idea what that demo will look like, but I hope it isn’t “Showing results from ChatGPT on your iPhone” and there’s a big modal window of ChatGPT output.

It is worth noting that not everyone is enamored with ChatGPT, despite the enthusiasm over its features.

Apple certainly won’t be demoing the imposter Scarlett Johansson voice from OpenAI at WWDC like OpenAI did at their spring event. You know, on account of them being sued, and all.

That same OpenAI spring presentation had perhaps one of the best demos of an LLM voice interface I’ve seen: one presenter spoke in English, the other spoke in Italian, and GPT-4o acted as a live translator. That was a great demo, and translation is definitely an area where Apple is already playing catch-up. It’s not rumored to be a feature, but it would be a good demo.

Google demoed integration with Google Workspace (Drive, Sheets, Gmail, Gchat (lol), etc.) and Apple should show that Siri can pull in information and context from Mail, Messages, Calendar, Photos, Reminders, etc. Ideally, it would be great to work with apps beyond that, but it needs to be able to plug into at least that data.

That means there needs to be a privacy interface for what apps Siri can access, especially if it is relaying it to a third party, and a privacy story about how Apple won’t be looking into every app on your device if you don’t want it to.

I fear that Apple simply won’t address anything but ChatGPT basics shoved into Siri windows. Which is possibly worse than continuing to work quietly on whatever the hell it is they’re working on. I’ll still run through some examples I’d love to see:

Show us someone asking a HomePod or Watch to do something, and instead of saying it can’t, it’ll execute it on your iPhone. Tell us the story about how Siri is secure and functional across devices under your Apple ID.

Demo someone telling Siri to play something on the TV, then asking their Apple Watch to “pause the TV,” with Siri knowing that “the TV” is the one I started playback on (and that my iPhone is nearby, based on Bluetooth), even if there are many TVs attached to my Apple ID.

Put on a little show of someone asking Siri where something is in the interface, or how they can do something. “Hey Siri, where are my saved passwords?” It whisks the person right to the Passwords section of Settings. “Hey Siri, I turned down the brightness all the way but it’s still too bright, what can I do?” and it surfaces the Reduce White Point control. Conversationally, “How can I turn on Reduce White Point only late at night?” and it offers a Shortcut based around the user’s sleep and wake-up times.

Demo someone using the new Siri with CarPlay, an essential application of Siri, where someone can conversationally tell Siri to “Play ‘Mona Lisa Overdrive’” and then follow that up with “Play the rest of the album,” and it’ll queue up the remaining tracks instead of doing something completely random like it does now.

Absolutely demo someone pausing music on their Mac, and telling their HomePod to “play what I was last listening to” and it can go resume playback on the HomePod exactly as if you had just hit play on your Mac.

Demo Siri being able to understand what’s currently on-screen when asked. “Hey Siri, who is the actor in this video?” Then conversationally follow that up with “What have I seen them in recently?” Where it could look through what was recently watched through the TV app and check that against the roles that actor has played. That’s not putting anyone out of a job (Well, except Casey. Sorry, buddy.)

Above all else, demo to the audience that when Siri doesn’t know what to do, it’ll ask. Show us a graceful failure state that reassures people how Apple can behave responsibly.

Let me illustrate what not to do with a recent interaction I had with Current Siri:

Me: “Play the soundtrack for The Last Starfighter.”
Siri: “Here’s The Last Starfighter.”
[Opens the TV app on iOS and starts playing The Last Starfighter from my video library.]
Me: “Play The Last Starfighter soundtrack.”
Siri: “Here’s Dan + Shay.”
[The Music app starts playing Dan + Shay’s “Alone Together.”]
Me: “Play The Last Starfighter Original Motion Picture Soundtrack.”
Siri: “Here’s The Last Starfighter by Craig Safan.”

It seems, however, that nothing is really rumored along these lines. Oh well, guess I’ll listen to some more Dan + Shay!

Ethics? Anyone?

A very troubling aspect of these rumors is Apple partnering with OpenAI. They didn’t ethically buy the rights to the information used to train their models, just like they didn’t take Scarlett Johansson’s no for an answer. They’re in active lawsuits with various media companies.

Even companies that have struck a deal with OpenAI—like Stack Overflow and Reddit—are getting bought off after their sites had already been scraped. Users, who generated all the value on those sites, can’t even delete their posts in protest.

Is Apple going to endorse OpenAI by giving them a thumbs up and slotting them into their next operating system releases without comment? They absolutely shouldn’t show anyone from OpenAI in their WWDC presentation, especially not Sam Altman.

There’s an easy way to draw a parallel to Google. Companies sue Google all the time over rights, and Apple still includes Google.

Of course, they are taking money from Google to be the default search engine on iOS, and then trying to have Safari insert Spotlight suggestions to pretend there’s a privacy angle. That Google deal now means that the default search will go through Google’s AI Overview. So Apple is already going to endorse Google’s approach to AI too, even if they don’t strike a deal for anything more.

And let’s not forget the ethics of Apple’s climate pledge. There should be a point in the WWDC keynote where Apple communicates how they can harness AI and still stay on target for their climate goals. That probably seems like a small thing, but people are getting pretty hand-wavy about maintaining their commitments while also putting their models to use.

Regardless of what happens, I suspect there will be plenty of disappointment and outrage to go around in the aftermath of WWDC. These are the times we live in. I just hope Apple takes some lessons from that thing with the hydraulic press and the iPad and doesn’t step in it too badly, just to show that they’re keeping up with the AI hype from the bozos of the tech world.

[Joe Rosensteel is a VFX artist, writer, and co-host of the Defocused and Unhelpful Suggestions podcasts.]


By Jason Snell for Macworld

The future of the iPhone is coming, but it’ll cost you dearly

It’s kind of hard to believe today, given how wildly successful the iPhone has been, but in the product’s early days there was only a single iPhone model for sale. It wasn’t until ten years ago, in 2014, that Apple introduced two different iPhone models, the iPhone 6 and iPhone 6 Plus. But that opened the floodgates, and Apple has spent the last ten years trying to find the right combination of new iPhone models to maximize the money it makes from its most important product.

If reports are true, next year Apple’s going to be switching things up again, dropping the iPhone Plus for a dramatic new model. After the discontinuation of the iPhone Mini after two years and the (apparent) death of the iPhone Plus after three, what does Apple have up its sleeve for 2025?

Continue reading on Macworld ↦


by Jason Snell

‘What If?’ for Vision Pro to arrive next week


The Watcher is watching with a Vision Pro.

Today Marvel and ILM dropped the trailer for the forthcoming “What If?” immersive story, which was just announced a couple of weeks ago, and (surprise!) is launching next Wednesday in the visionOS App Store as a free app.

Marvel and ILM say the immersive story is about an hour long, and is directly connected to the “What If?” animated series on Disney+, which itself is a multiversal riff on various Marvel Cinematic Universe movies. (Lest you think that this multiverse stuff is new, “What If?” is based on a comic that I absolutely devoured when I was a kid. I still have my issue #1 in a box—what if Spider-Man joined the Fantastic Four?—not too far from where I’m typing this.)

The story will feature the cosmic being The Watcher asking you, the Vision Pro user, for help in facing off with dangerous “variants,” alternate-universe versions of various Marvel characters. Sorcerer Supreme Wong will instruct players on how to cast spells (presumably with the Vision Pro’s hand tracking features) and use the Infinity Stones. Versions of characters such as Thanos, Hela, the Collector, and Red Guardian will appear.

Marvel and ILM describe the app as including both full virtual environments as well as mixed-reality scenarios which involve the real world around players. It sounds very much like an interesting mash-up of interactive and animated elements. I’m very much looking forward to seeing how the “What If?” animation style translates into 3-D. I guess my wait will be over in a week!

—Linked by Jason Snell

Myke returns to the show to give his personal iPad Pro review, and we discuss Apple’s accessibility announcements, worrying Apple AI reports, and some mystifying rumors about future iPhones for this year and next.


Why the “great rebundling” isn’t the return of the cable bundle; recalibrating how talent is paid for streaming shows; Paramount twists in the wind; Spulu gets a real name; the NFL makes a deal with Netflix; and Diamond Sports meets with skepticism.


By John Moltz

This Week in Apple: Going hard on the software

John Moltz and his conspiracy board. Art by Shafer Brown.

The new iPad Pros are astounding, fast, and durable machines that run an operating system. In other news, the new chatbots are here and we’ll never hear the end of it.

It’s a real good news/bad news situation

If you’ve been living under a rock for the past week… well, I have a lot of questions. First, I hope this is just a hatch-type situation where the rock is simply hiding the door to a silo or bunker because otherwise that sounds really uncomfortable. Unless it’s a very small rock, in which case why bother? We can see you. But I don’t want to get distracted because we have Apple news to talk about, so let’s circle back around to this living-under-a-rock situation later.

The Apple web’s main discourse this week revolved around reviews of the new iPad Pros (tl;dr very impressive, hecka fast, lighter than air because they are literally lighter than the iPad Airs) and thoughts about the overall iPad experience.…

This is a post limited to Six Colors members.



by Jason Snell

It’s forgery all the way down

Yet another incredible story on Cabel Sasser’s blog, this one about a supposedly vintage Apple Employee badge:

Wow. Someone was selling Apple Employee #10’s employee badge?! What an incredible piece of Apple history! Sure, it’s not Steve Jobs’ badge (despite the auction title), but there are only so many of these in the world — especially from one of the first ten employees.

How do you cover for a forged document? More forged documents!

—Linked by Jason Snell

