I used to really appreciate Slack as an app. The engineering seems impressive, but the constant stream of feature and layout changes typifies the "churn" of modern software: a perfectly good app that could be left well alone -- and would be perfect -- is instead endlessly modified and crowded by new "features."
I really wish providing the ability to toggle any new feature off were an industry standard.
I know this has been said before, but...it's nearly 2024 and 99% of major companies, including in the tech industry, still can't make a mobile-friendly site (or won't)?
The learning curve for this was back in ~2010. I have to believe it's largely intentional -- possibly to drive people away from sites and toward apps -- but many of these truly terrible, nearly unusable sites don't offer apps, so what remains is simply a major company that can't hack together basic device compatibility.
Maybe there's more to it?
Of course GPTs do code better than math, and images and written word better than either.
It's so much easier to BS in English than to BS in code, and even more so than in math.
Math either coheres or it does not. Code either works or it does not (though interpretation sometimes allows a fudge factor).
Language and art OTOH are imaginative, and accuracy is only one facet. There's wiggle room.
We bring our human motivations, and the fuzziness of diffusion models takes us for an interpretive ride. #ai
Burning Diffuse from Both Ends
Part one of an unknown length series
If you stumbled onto this article and expect to learn something of AI in a traditional fashion, you might spend your time better elsewhere (even if that's another article on this blog).
If I'm going to bother writing and posting a blog item, I usually want it to be worth reading to all who come across it.
Been exploring the use of AI tools, how they can fit into life organically (I know, same as everyone).
Getting some interesting (to me) insights in the ways that an AI tool can act as labor-saving device, which works best when the AI is not being crammed into a role it doesn't suit.
Example: NovelAI still feels iffy as a writer / storyteller (this seems a common opinion) but finds an excellent niche in breaking me past writer's block and iterating on ideas -- a companion, not a solution in itself.
AI Coder Phantasm
The future of coding itself is at stake -- that seems to be the message of many articles and discussions in recent months. Certainly anyone whose career lies in software development and related fields has reason to ponder the significant leaps forward of artificial intelligence in this area, from GitHub Copilot as a VS Code extension to prodding ChatGPT to churn out whole components.
I know, it’s just a “Someone Is Wrong On The Internet” piece, but that NYTimes piece about Why Signal Is Bad And Privacy Is Bad needs to be refuted in a compact explainer, so here’s mine: https://www.tbray.org/ongoing/When/202x/2022/12/29/Privacy-is-OK
Many technological advancements prove beneficial. Adopting such improvements is good. Helping others determine how technology may help them can also be good. Expecting others to adopt these improvements or be left behind by a changing world is bad.
In other words: create change, not breaking changes. Technology, properly developed, influences its adoption naturally. Adoption by coercion generally indicates false advancements. Real technology helps, never hinders.
Engineers, coders, planners and all other detail-oriented professionals often take flak (sometimes rightly so) for hyperfixating on minutiae.
The question for all detail-oriented thinkers to ask ourselves is "is this detail important (here and now)?"
Sometimes the answer is yes -- and that needs to be given legitimacy.
Sometimes the answer is no -- and we need to learn to let go.
It's a two-way street, and it's often very difficult for each side to understand the other's communication style.
Why is Mastodon, a free platform run by volunteers that I didn't even know existed last week and has experienced explosive growth in a few days, slow?
Like so many people, I've been experimenting with the image-generation AIs recently made publicly available. There's an interesting recurrent outcome, especially with topics the AI may understand via fewer representations: when lacking enough information to fully form a concrete picture, it starts adding extra objects into the scene that relate to the theme of that picture. Ask for an ironsmith and get all kinds of extra tools; ask for a medieval archer and get extra weapons.
An issue I had early on in math, and which I've seen others struggle against, is getting unnerved by the natural abstruseness of symbology. All the δs and ∀s and whatnot make math seem much, much more complex than it often is.
It's true -- the nitty-gritty of calculations and derivations can get intense, especially for people (like me) who started out behind.
But 9/10 times in the world of math, all you need is to grasp what those symbols are trying to say -- which is usually much simpler than it looks.
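A classic example of this: the formal epsilon-delta definition of a limit looks intimidating on the page,

```latex
% "The limit of f(x) as x approaches a is L":
\forall \varepsilon > 0 \;\, \exists \delta > 0 :
  \quad 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
```

but all those quantifiers are just saying: however close you want f(x) to be to L (within ε), you can get it that close by keeping x close enough to a (within δ). One plain sentence, hiding behind a wall of symbols.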
Even the introduction to the Meditations of Marcus Aurelius makes for powerful reading. It stands on its own and makes a useful reference in times of confusion or attritional deviation (speaking from personal experience).
The Alzheimer's research scandal suggests most or all Alzheimer's-targeting drugs are at best ineffective, at worst harmful.
Why does it seem that, anecdotally, many of the drugs work just fine?
I have personally heard *many* reports that the advance of Alzheimer's slowed with the use of meds.
Are these one and all placebo effect?
Or is there, potentially, another factor at play here?
The research, while largely invalidated, may still lie along roughly the right lines.
A lot of back and forth here re DDG: https://news.ycombinator.com/item?id=31490515
The unfortunate but practical truth is that the combination of people who care about privacy AND are invested in technology is relatively small.
For a given individual who fits these above categories, getting heated over privacy companies being imperfect is, imo, not beneficial. It's understandable to feel "betrayed" but there are more practical ways to defend your privacy, and that bottom line is what deserves the focus.
Great example of well-intentioned technology oversight liable to go awry:
But it does look like it's receiving a) attention and b) debate, which is the expected and healthy process for bills of this nature.
There are two sides to remote work, and anecdotally I've seen it cut both ways. Nonetheless, interesting article: https://www.essence.com/news/money-career/employees-say-remote-work-improved-mental-health/
This sort of outcome is unfortunate but predictable -- any tech organization has administrative decisions to make. While there are better and worse ways to handle such situations, and we can critique those approaches, criticizing the outcome itself is questionable, since the choice is intended to prioritize the organization's own interests.
This is where self-ownership technologies become more relevant.