This is a sample newsletter. Sign up to get email delivery!
Hello there!
They tell me that this is the most wonderful time of the year. I use this time to reflect on what was, what is, and what will be, both this year and next.
In the US, we celebrate Thanksgiving in late November. Thanksgiving means different things to different people. It’s an extension of the age-old fall harvest celebration, where people spend time with family and friends. The day centers on giving thanks not only for the harvest, but also for the blessings of the past year.
I want to make sure you know that I am thankful for your participation in the JetsonHacks community. I hope you are getting true value out of it.
The big Jetson news for the month is that the NVIDIA Jetson AGX Thor and AGX Orin are on holiday sale in the US. The AGX Thor is 20% off and the AGX Orin is 50% off. The Orin Nano remains at $249. The sale ends January 11, 2026:
You can get them on Amazon:
Jetson AGX Thor Developer Kit ($2799 20% off): https://amzn.to/4oezEZn
Jetson AGX Orin Developer Kit ($999 50% off): https://amzn.to/4inNE1A
or on the NVIDIA Marketplace: https://marketplace.nvidia.com/en-us/enterprise/robotics-edge
Note that the AGX Orin has been popular on Amazon, and various resellers have been hiking the price up a bit when stock runs low at NVIDIA. The price should be $999. I’ll also note that with the rapidly rising cost of memory (the Mempocalypse), these may be more of a bargain than originally intended.
The second big piece of news is that the Jetson AI Lab (https://www.jetson-ai-lab.com) just went through a major overhaul and upgrade. A lot of work has gone into bringing the tutorials and examples up to date, and it’s well worth the time to check out.
One of the JetsonHacks community members (Mehrdad Majzoobi) created an aluminum enclosure for the Jetson Orin Nano that sells on Shopify: https://shop.getubo.com/products/nvidia-jetson-nano-enclosure. I know these are popular, and the introductory price of $49 makes them a good value.
I’ve been spending a lot of time “Thinking about Thinking” and how to learn about subjects in a more valuable way. Over the next few months, I think I’ll spend more time working on how to better integrate AI into edge devices. We hear the term “AI on the Edge” a lot, but what does it actually mean? And how do we use new AI tools to actually help us, while avoiding the complacency they can invite? Along those lines, here are some thoughts.
—
Think about what you’re doing
Back in the stone ages, when ChatGPT first appeared in November 2022, we were introduced to a brave new world. The promise was simple. Artificial intelligence would reshape how we create, work, and think. We would type in a prompt describing what we wanted to read, see, or hear, and the AI would create it. The drudgery of programming would fall away, and something far more civilized would take its place.
And it would not just be software. AI would be embodied in robots in the physical world as well. Cars that drive themselves. Home assistants that promise to remove the mundane tasks of life. No more being forced to handle everyday chores. Everything that feels like work would be eliminated. It was much of the idea behind the ideal world of Star Trek, but this time for real.
The month before ChatGPT was released, Elon Musk bought Twitter. Almost immediately, new management cut the Twitter workforce in half, and then in half again within a few months. This headcount reduction became the poster child for how even an established technology company could be run.
You can imagine that other CEOs saw this and crafted a new employment story. Not that job reduction is painful, but that it is necessary and efficient.
In that story, generative AI became the lever. Smaller, more agile technical teams. Powerful tools in the hands of the best of the best. Capital expenditures on AI infrastructure instead of employees and salaries. That became the bellwether that prominent companies now benchmark against.
What rarely shows up on the balance sheet is the cognitive debt that comes with it: lost institutional memory, fewer people questioning assumptions, and less deep, hard-won understanding embedded in the organization.
Yeah, but what’s it do?
“Everyone has a plan until they get punched in the mouth.” — Mike Tyson, boxing champion
Of course, on paper this all sounds great. Out in the wild, the results can be amazing and completely underwhelming at the same time. If you’ve been in the technology game for any length of time, you know about hype cycles. Hype cycles are independent of the usefulness of the product. It’s not surprising that a lot of AI tools give great demos. It is also not surprising that, in many cases, those great demos fail to scale to production use.
That’s not to say there aren’t amazing applications being built with AI, or astounding research results. But as with many technology milestones, people make the mistake of viewing new tools as replacements for existing tasks. The real power of new technology is to change paradigms, not to act as an incremental improvement to existing ones.
This has been true about technology for a very long time. When ancient Egyptians developed fractions to divide bread and grain, they weren’t optimizing arithmetic. They were inventing a new mental model for sharing scarce resources. Dividing 5 loaves among 8 people became 1/2 plus 1/8 of a loaf each, a sum of simple unit fractions anyone could measure out. Just as fractions gave ancient societies a new way to reason about division and fairness, the printing press gave people a new way to reason about knowledge. It transformed ideas from fragile, hand-copied artifacts into stable, shareable objects that could move freely through the world.
The smartphone is a more recent example. The iPhone did not simply improve the mobile phone. It turned the phone into a general-purpose, networked computer that people carry everywhere. Maps replaced navigation skills. Contacts replaced memorized phone numbers. Notifications reshaped attention. Entire categories of tools collapsed into a single device. What changed was not convenience alone, but how people orient themselves in the world and how much thinking they externalize.
That’s the question worth asking: What does this AI thing actually do?
What’s it cost?
I’m sure that you, just like me, jumped on the LLM and AI bandwagon early. I paid my $20 a month to OpenAI and got to work. Other LLMs came along, and I bought subscriptions to those too. It was great. I could ask LLMs all the questions I wanted and argue with them to my heart’s content. Even better, I could have them argue amongst themselves.
Early on, during one of my arguments with an LLM, I thought to myself, maybe this isn’t a good use of my time. This was back in the heyday of prompt engineering, well before vibe-coding became a thing, if you can even remember back that far.
People often say that history repeats itself, or at least rhymes. I’m writing this on an older Apple Macintosh, 5 BAI (Before Artificial Intelligence). Apple has added AI features to the Mac, but this machine isn’t fast enough to run them efficiently. The result is lag, along with word substitutions and additional sentence fragments appearing where I didn’t intend them.
I took typing in middle school, so I rarely look at what I’m typing now. I assume the keys I press result in text appearing where I expect it to appear. Now, instead of writing something once, I get to write it several times so I can correct, and mostly remove, the substitutions and “corrections” that AI introduces.
This is helpful for short emails. Instead of writing a quick reply and sending it on its way, I can now carefully review and craft the message multiple times. That extra effort doesn’t disappear. It accumulates as cognitive debt.
We all know what the saying “There is no free lunch” means. Everything comes at a cost.
Put aside the quality of the generated text for the moment. Consider the idea of cognitive cost and cognitive debt. If you use LLMs instead of writing yourself, or even instead of searching, how much of that information do you retain? How well written is the final result? What learning skills are you actually exercising? What value are you adding?
The results aren’t surprising. A recent study, “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Tasks,” explores exactly this question. It’s worth reading in full, though there is a conclusion section if you’re short on time.
The most striking observation is what we already suspect. People tend to take LLM output at face value. They use it as a reference, rather than as a starting point for questioning and exploration. This works a surprising amount of the time, but when it fails, it often does so spectacularly. And it’s stubborn. Once an LLM goes down a particular path, it can be difficult to pull it back, even when it’s clearly wrong or hallucinating.
You won’t remember much from your session when you choose to have the LLM write for you. You’ll type the prompt, and then copy the generated text after a quick proofread.
My feeling is that for many tasks, especially writing, AI should make the process longer, not shorter. It should help us explore paths we wouldn’t consider on our own. It should surface supporting research, clarify positions, and help us construct steelman arguments from multiple perspectives. Think of it not as an editor or creator, but as an assistant. Something to bounce ideas off of.
The real danger is not that LLMs get things wrong, but that they get things right often enough to stop us from thinking deeply. To me, the question we should be asking is not which current tasks AI replaces, but what entirely new things AI makes possible.
Happy Holidays!
