JetsonHacks

Developing on NVIDIA® Jetson™ for AI on the Edge

This is a sample newsletter. Sign up to get email delivery!

Hello there!

They tell me that this is the most wonderful time of the year. I use this time for reflection on what was, what is, and what will be in this year and next.

In the US, we celebrate Thanksgiving in late November. Thanksgiving means different things to different people. It’s an extension of the ages-old fall harvest celebration, where people spend time with family and friends. The day centers around giving thanks not only for the harvest, but also for the blessings of the past year.

I just want to make sure that you know I am thankful for your participation in the JetsonHacks community. I hope that you are getting true value out of your participation.

The big Jetson news for the month is that the NVIDIA Jetson AGX Thor and AGX Orin are on holiday sale in the US. The AGX Thor is 20% off and the AGX Orin is 50% off. The Orin Nano remains at $249. The sale ends January 11, 2026.

You can get them on Amazon:

Jetson AGX Thor Developer Kit ($2799 20% off): https://amzn.to/4oezEZn

Jetson AGX Orin Developer Kit ($999 50% off): https://amzn.to/4inNE1A

or on the NVIDIA Marketplace: https://marketplace.nvidia.com/en-us/enterprise/robotics-edge

Note that the AGX Orin has been popular on Amazon, and various resellers have been hiking the price up a bit when stock runs low at NVIDIA. The price should be $999. I’ll also note that with the rapidly rising cost of memory (the Mempocalypse), these may be more of a bargain than originally intended.

The second big piece of news is that the Jetson AI Lab (https://www.jetson-ai-lab.com) just went through a major overhaul and upgrade. A lot of work has gone into bringing the tutorials and examples up to date, and it’s well worth the time to check out.

One of the JetsonHacks community members (Mehrdad Majzoobi) created an aluminum enclosure for the Jetson Orin Nano that sells on Shopify: https://shop.getubo.com/products/nvidia-jetson-nano-enclosure. I know these are popular, and the introductory price of $49 makes it a good value.

I’ve been spending a lot of time “Thinking about Thinking” and how to learn about subjects in a more valuable manner. I think over the next few months I’ll spend more time working on how to better integrate AI into edge devices. We hear the “AI on the Edge” term a lot, but what does that actually mean? Also, how do we use new AI tools to actually help us, and avoid them making us complacent? Along those lines, here are some thoughts.

Think about what you’re doing

Back in the stone ages, when ChatGPT first appeared in November 2022, we were introduced to a brave new world. The promise was simple. Artificial intelligence would reshape how we create, work, and think. We would type in a prompt describing what we wanted to read, see, or hear, and the AI would create it. Without the drudgery of programming, something far more civilized would take its place.

And it would not just be software. AI would be embodied in robots in the physical world as well. Cars that drive themselves. Home assistants that promise to remove the mundane tasks of life. No more being forced to handle everyday chores. Everything that feels like work would be eliminated. Much of the idea behind the ideal world of Star Trek, but this time for real.

The month before ChatGPT was released, Elon Musk bought Twitter. Almost immediately, new management cut the Twitter workforce in half, and then in half again within a few months. This headcount reduction became a new poster child for the way even an established technology company could be run.

You can imagine that other CEOs saw this and crafted a new employment story. Not that job reduction is painful, but that it is necessary and efficient.

In that story, generative AI became the lever. Smaller, more agile technical teams. Powerful tools in the hands of the best of the best. Capital expenditures on AI infrastructure instead of employees and salaries. That became the bellwether that prominent companies now benchmark against.

What rarely shows up on the balance sheet is the cognitive debt that comes with it: lost institutional memory, fewer people questioning assumptions, and less deep, hard-won understanding embedded in the organization.

Yeah, but what’s it do?

“Everyone has a plan until they get punched in the mouth.” — Mike Tyson, boxing champion

Of course, on paper this all sounds great. Out in the wild, the results can be both amazing and completely underwhelming at the same time. If you’ve been in the technology game for any length of time, you know about hype cycles. Hype cycles are independent of the usefulness of the product. It’s not surprising that a lot of AI tools give great demos. It is also not surprising that, in many cases, those great demos fail to scale to production use.

That’s not to say there aren’t amazing applications being built with AI, or astounding research results. But as with many technology milestones, people make the mistake of viewing new tools as replacements for existing tasks. The real power of new technology is to change paradigms, not to act as an incremental improvement to existing ones. 

This has been true about technology for a very long time. When ancient Egyptians developed fractions to divide bread and grain, they weren’t optimizing arithmetic. They were inventing a new mental model for sharing scarce resources. Just as fractions gave ancient societies a new way to reason about division and fairness, the printing press gave people a new way to reason about knowledge. It transformed ideas from fragile, hand-copied artifacts into stable, shareable objects that could move freely through the world.

The smartphone is a more recent example. The iPhone did not simply improve the mobile phone. It turned the phone into a general-purpose, networked computer that people carry everywhere. Maps replaced navigation skills. Contacts replaced memorized phone numbers. Notifications reshaped attention. Entire categories of tools collapsed into a single device. What changed was not convenience alone, but how people orient themselves in the world and how much thinking they externalize.

That’s the question worth asking: What does this AI thing actually do?

What’s it cost?

I’m sure that you, just like me, jumped on the LLM and AI wagon early. I paid my $20 a month to OpenAI and got to work. Other LLMs came around, and I bought subscriptions to those too. It was great. I could ask LLMs all the questions I wanted and argue with them to my heart’s content. Even better, I could have them argue amongst themselves.

Early on, during one of my arguments with an LLM, I thought to myself, maybe this isn’t a good use of my time. This was back in the heyday of context engineering, well before vibe-coding became a thing, if you can even remember back that far.

People often talk about knowing history, that it repeats or at least rhymes. I’m writing this on an older Apple Macintosh, 5 BAI (Before Artificial Intelligence). Apple has added AI features to the Mac, but this machine isn’t fast enough to run them efficiently. The result is lag, along with word substitutions and additional sentence fragments appearing where I didn’t intend them.

I took typing in middle school, so I rarely look at what I’m typing now. I assume the keys I press result in text appearing where I expect it to appear. Now, instead of writing something once, I get to write it several times so I can correct, and mostly remove, the substitutions and “corrections” that AI introduces.

This is helpful for short emails. Instead of writing a quick reply and sending it on its way, I can now carefully review and craft the message multiple times. That extra effort doesn’t disappear. It accumulates as cognitive debt.

We all know what the saying “There is no free lunch” means. Everything comes at a cost.

Put aside the quality of the generated text for the moment. Consider the idea of cognitive cost and cognitive debt. If you use LLMs instead of writing yourself, or even instead of searching, how much of that information do you retain? How well written is the final result? What learning skills are you actually exercising? What value are you adding?

The results aren’t surprising. A recent study, “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Tasks,” explores exactly this question. It’s worth reading in full, though there is a conclusion section if you’re short on time.

The most striking observation is what we already suspect. People tend to take LLM output at face value. They use it as a reference, rather than as a starting point for questioning and exploration. This works a surprising amount of the time, but when it fails, it often does so spectacularly. And it’s stubborn. Once an LLM goes down a particular path, it can be difficult to pull it back, even when it’s clearly wrong or hallucinating.

You won’t remember much from your session when you choose to have the LLM write for you. You’ll type the prompt, and then copy the generated text after a quick proofread.

My feeling is that for many tasks, especially writing, AI should make the process longer, not shorter. It should help us explore paths we wouldn’t consider on our own. It should surface supporting research, clarify positions, and help us construct steelman arguments from multiple perspectives. Think of it not as an editor or creator, but as an assistant. Something to bounce ideas off of.

The real danger is not that LLMs get things wrong, but that they get things right often enough to stop us from thinking deeply. To me, the way we should be thinking about this is not what current tasks AI replaces, but rather what entirely new thing does AI make possible?

Happy Holidays!


4 Responses

  1. Very insightful, I couldn’t stop reading to the end, thanks for the excellent thoughts. It is indeed hard to crystalize out in our minds where this speeding freight train leads…perhaps I should ask ChatGPT? 😉

    1. You are welcome, thank you for the kind words. I’m sure ChatGPT has an opinion on the matter, just as long as you remember that it is a liar that lies 🙂 When the ‘experts’ tell you that they know what the future brings, just remember that they are always nearsighted and don’t wear prescription eyewear.

  2. Your blog prompted a very stimulating lunch conversation. A key question you asked above was “What value are you adding?”. Pretty big deal here, everybody currently lives their life in balance with ecosystems that place value on our contributions, or mere existence, which in turn sustains us. Corporations, churches, families, and governments use money and other leverage to tug on the web of forces that bind us to them, and in theory, we all thrive. Human politics wildly debates how this should work, and forward we stumble. AI is an enormous disruption to the balance of this system, the balance of power that drives it, and the value of our contributions. I don’t know what to tell current students to major in such that they have a bright future in this new world.

    On the other hand, I wonder if your caution about how we interact with AI, per your own metaphor, is much like the prior caution about “staring at your smartphone all day” we had about ourselves and our children, after that disruptive technology appeared. Today, we can see that the world indeed changed, and people below a certain age cannot imagine more than an hour apart from their smartphone–there was no stopping it. At the same time, despite all the various worries, I think most would agree that there was nothing but upside about the arrival of the smartphone, except I guess the commoditization of our personal information. So, perhaps this will somehow all work out, dark horizon or not.

    1. I’m glad that you were able to get a subject for a lunch conversation!
      I’m in California, so they have all sorts of ‘new age’ sayings here. The one I think applies here is ‘Be mindful’. For people who grew up without smartphones, it’s pretty easy to tell when the glowing rectangle should be put down. For younger folks, not so much. It’s like people with TVs in their bedrooms; almost any doctor will tell you that it affects sleep in a very negative manner. Sleep is one of the most important things that people do, and it is a major factor in lifelong health. Yet we casually throw it away without thought.

      There are certain things smartphones do that are absolutely amazing. Driving directions, for one. Before that, everyone had maps; any sort of excursion was handed down as tribal knowledge (‘Turn left at the grocery store’) or required planning like a Magellan voyage. With the phone, we could suddenly put away our compass and sextants and just go places. No anxiety, no ‘I wonder how long it takes to get there’, the phone tells you all that and traffic conditions at the same time. We used to have to listen to the radio to get traffic reports whenever we got in the car. The whole idea of terrestrial radio suddenly became quaint, because everyone carries their own entertainment with them now.

      The skill is to be mindful of how you’re using technology. People need to be bored, just so they can think. Daydream. In a lot of ways, that’s what driving was. Hit the open road and free yourself from having to do or think about anything. Just daydream. If someone is bored now, they just pick up the phone or tablet. They aren’t bored. They don’t think; they get their little dopamine hits and move on.

      However, try a simple exercise: bring up a shorts feed on YouTube or TikTok. Most of the time, the material is directed at your interests. Watch for 15 or 20 minutes (like a lot of kids do). Wait 5 minutes. Then get out a piece of paper and pen and write down what you saw, or might have learned. Did you find the time valuable?

      People who think deeply train themselves to think. That has been true for thousands of years. People study something, and then try to recreate it with what they learned. Then they go back and compare it to the original, looking for differences. Rinse, repeat. Galileo never had much formal education, and neither did Benjamin Franklin. Bill Gates had the start of an education, but would take a month out of the year to read, away from Microsoft. But they seemed to pick some things up.

      Also remember that simply watching or reading something doesn’t mean that you’ve learned it. You have to build it into your body; that’s why it’s important to use your hands when learning. Write things down, draw. That’s why AI isn’t intelligent; there are no real-world consequences to the tokens it spews. It’s just rhyming with the source material it was trained on.

      The college question is pretty easy. STEM. Regardless of what happens in the AI world, having a grasp of the ‘real’ way the world works will still be valuable. Those are skills you can stack, and certainly the easiest way to be valuable. This gives you a framework for how to think.

      I would say second is something more general, which is communication. No matter what AI does, communication skills will always be valuable. People will always be people, and what they respond to won’t change. There will always be salespeople, marketing folks, politicians, or ‘influencers’, even if by other names. The real question is ‘What is the value of a college education?’ If you figure AI will get better at education, you need to decide ‘What is education?’ and whether it is important to get it at an institution. There’s the social aspect; how do you value that? Those questions are on an individual basis. At that age, some kids benefit from being more social; others might benefit from reining it in. These are really hard questions that a parent and student need to really think through.


Disclaimer

Some links here are affiliate links. If you purchase through these links I will receive a small commission at no additional cost to you. As an Amazon Associate, I earn from qualifying purchases.
