this comment traveled in time from 2001 lol
simply not true. they’re no angels or open source champions, but come on.
if a player brought it up as an issue, i’d probably go back on it. i’m an easy DM. i like Rule of Cool, but ultimately it’s about everyone having a good time, whatever that means to them.
sure it does. it won’t tell you how to build a bomb or demonstrate explicit biases that have been fine-tuned out of it. the problem is McDonald’s isn’t an AI company and is probably just using ChatGPT on the backend, and GPT doesn’t give a shit about bacon ice cream out of the box.
even though those are rules as written, i like to honor the crits, with a bit of nuance. if you’re super stealthy and roll a 1, maybe it makes a small noise but doesn’t cause an alarm. if you’re dumping strength on your wet noodle wizard, maybe you’re able to move that heavy thing an inch on a 20. it’s always situational though. people get excited to see a crit, and i think it makes it more fun.
ah yeah. maybe less well known, but i had a dev kit from Qualcomm that came with Ubuntu
not likely. i think it requires a lot of systems working together
always? Android runs a Linux kernel, and they support all kinds of embedded systems that run Linux.
pretty standard compared to OSs like Android and iOS. i think the mobile OSs, at least recently, have done better at this; they don’t ask for permission until they need it. want to import bookmarks? i need file system access for that. want to open your webcam? i need device access. doing it all upfront leads to all the problems mentioned in this thread: unclear as to why, easy to forget what access you’ve given, no ability to deny a subset of options, etc.
nushell is excellent for dealing with structured data. it’s also great as a scripting language.
not sure what you mean by expensive. i run language models on my laptop that are pretty good at this type of task. and, yes, these models are ultimately infinitely easier and cheaper than trying to change the human proclivity for attention-seeking behavior.
you’ve not seen the type of email chains i get at work. personally i think it should be illegal to reply-all to an email chain with hundreds of people with “Great job team!!! 🎉”. but it would be great to have an LM read it near instantaneously for me and be like “oh yeah there was a product release and here are a few relevant metrics”. it doesn’t matter if it’s not 100% accurate on every subtle detail; a decent summary could tell me where, or whether, i should even dig into the details.
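to make that concrete, here’s a minimal sketch of the kind of thing i mean, using Hugging Face transformers on a local machine; the model name and the email text are just placeholders, not a recommendation:

```python
# minimal sketch of local summarization with Hugging Face transformers;
# the model name and the email text are placeholders, not a specific setup
from transformers import pipeline

# runs entirely on the local machine once the weights are downloaded
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

email_thread = """
Subject: Q3 launch recap
(hundreds of replies, most of them some variation of "Great job team!!!")
"""

# a rough summary is enough to decide whether the thread deserves a real read
result = summarizer(email_thread, max_length=60, min_length=15, do_sample=False)
print(result[0]["summary_text"])
```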
yeah i see that too. it seems like mostly a reactionary viewpoint. the reaction is understandable to a point, since a lot of the “AI” features are half-baked and forced on the user. to that point, i don’t think GNOME etc should be scrambling to add copies of these features.
what i would love to see is more engagement around additional pieces of software that are supplemental. for example, i would love if i could install a daemon that indexes my notes and allows me to do semantic search. or something similar with my images.
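something in the spirit of this sketch with sentence-transformers is what i’m picturing; the notes path, model name, and query are all made up for illustration:

```python
# rough sketch of the notes-indexing idea with sentence-transformers;
# the notes directory, model name, and query are made up for illustration
from pathlib import Path
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small enough to run on CPU

# a real daemon would cache embeddings and re-index on file changes;
# here we just embed every markdown note once
notes = sorted(Path.home().joinpath("notes").glob("*.md"))
corpus = model.encode([n.read_text() for n in notes], convert_to_tensor=True)

# semantic search: rank notes by cosine similarity to the query
query = model.encode("that idea i had about backup strategies", convert_to_tensor=True)
for hit in util.semantic_search(query, corpus, top_k=5)[0]:
    print(notes[hit["corpus_id"]].name, round(hit["score"], 3))
```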
the problems with AI features aren’t within the tech itself but in the surrounding politics. it’s become commonplace for “responsible” AI companies like OpenAI to not even produce papers around their tech (product announcement blogs that are vaguely scientific don’t count), much less source code, weights, and details on training data. and even when Meta releases their weights, they don’t specify their datasets. the rat race to see who can make a decent product with this amazing tech has made the whole industry a bunch of pearl-clutching, FOMO-driven tweakers. that sparks a comparison to blockchain, which is fair from the perspective of someone who hasn’t studied the tech or simply hasn’t seen a product that is relevant to them. but even those people will look at something fantastical like ChatGPT as if it’s pedestrian or unimpressive, because when i asked it to write an implementation of the HTTP spec in the style of Fetty Wap it didn’t run perfectly the first time.
a lot of things are unknown.
i’d be very surprised if it doesn’t have an opt out.
a point i was trying to make is that a lot of this info already exists on their servers, and your trust in the privacy of that is what it is. if you don’t trust their claims that it’s run on per-user virtualized compute, that it’s e2e encrypted, or that they’re using local models, i don’t know what to tell you. the model isn’t hoovering up your messages and sending them back to Apple unencrypted. it doesn’t need to for these features.
all that said, this is just what they’ve told us, and there aren’t many people who know exactly what the implementation details are.
the privacy issue with Recall, as i said, is that it collects a ton of data passively, without explicit consent. if i open my KeePass database on a Recall enabled machine, i have little assurance that this bot doesn’t know my Gmail password. this bot uses existing data, in controlled systems. that’s the difference. sure maybe people see Apple as more trustworthy, but maybe sociology has something to do with your reaction to it as well.
i mean, you’re right. i’m just saying it’s a little silly to ship a Python interpreter when there are easier, better supported ways to do the same thing.
looks like tesseract provides C bindings which are probably being utilized in those apps.
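for comparison, here’s the same engine driven from Python on a desktop via pytesseract (which shells out to the tesseract binary); the filename is made up, and an Android app would go through the C/C++ API instead:

```python
# desktop illustration of tesseract via pytesseract; the filename is made up,
# and an Android app would use the C/C++ API rather than this wrapper
from PIL import Image
import pytesseract

# same basic pipeline either way: load image -> normalize to RGB -> OCR
img = Image.open("screenshot.png").convert("RGB")
print(pytesseract.image_to_string(img))
```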
people probably hate the iOS integration just because it’s another AI product, but they’re fundamentally different. the problem with Recall isn’t the AI, it’s the trove of extra data that gets collected that you normally wouldn’t save to disk, whereas the iOS features only access existing data that you give them access to.
from my perspective this is a pretty good use case for “AI” and about as good as you can do privacy-wise, if their claims pan out. most features use existing, user-controlled data and local models, and it’s pretty explicit about when it’s reaching out to the cloud.
this data is already accessible by services on your phone or exists in iCloud. if you don’t trust that infrastructure already then of course you don’t want this feature. you know how you can search for pictures of people in Photos? that’s the terrifying cLoUD Ai looking through your pictures and classifying them. this feature actually moves a lot of that semantic search on device, which is inherently more private.
of course it does make access to that data easier, so if someone could unlock your device they could potentially get at sensitive data with simple prompts like “nudes plz”, but you should have layers of security on more sensitive stuff like bank or social accounts that would keep Siri from reading it. Siri likely won’t be able to access app data unless it’s exposed via their API.
no need for Python. there’s a Google SDK, ML Kit, that will do the heavy lifting on this. if that’s not acceptable, TensorFlow, PyTorch, and ONNX support Android, albeit not as nicely integrated.
your image processing pipeline will be imageSource -> RGB encoding -> OCR -> profit. your OCR just needs an RGB-encoded image. it doesn’t matter if the source is a JPEG or a YUV video feed.
as for whether there’s an app that fits OP’s exact use case, dunno.
i guess the rare thing is the public commitment, but Apple has generally had a good track record for updates compared to its Android counterparts, which have previously failed to meet their goals or set laughably short ones like 2 years.
same as with crypto. the software community started using GPUs for deep learning, and the GPU makers were just meeting that demand.
i’d say generally you’re right to keep them so that you don’t have to install them again on updates. depends on how heavy the dependencies are, how often you update, if you’re planning on removing the package soon, etc. it’s gonna be tough to make a recommendation without knowing your situation, but for me personally i’d be on the lookout for a binary distribution or other more efficient install options. barring other options i’d probably keep them as long as they aren’t overriding another system library.