JetsonHacks

Developing on NVIDIA® Jetson™ for AI on the Edge

JetsonHacks Newsletter – June 2024

This is a sample newsletter. Sign up to get it delivered by email!

Hello there!

First the news. Lots of good stuff happening.

Platform Services

NVIDIA JetPack 6.0’s Jetson Platform Services have just been released! Supporting cloud-native technologies, these services allow efficient deployment and management of microservices. The microservices enhance edge AI applications with advanced video processing, AI inference, and analytics capabilities. These services are essential for building robust AI applications on Jetson devices, streamlining workflows, and improving deployment efficiency.

For more details, visit the ​NVIDIA Developer Blog​.

AI-Based Steering

Here’s a cool project. Arrow Electronics and NVIDIA have collaborated to develop an AI-based steering system for the SAM Car, a racecar designed for disabled drivers. Leveraging NVIDIA’s AGX Orin processors and AI frameworks, the system uses high-resolution, dual-axis cameras and deep learning algorithms to interpret driver inputs, controlling the vehicle’s steering, throttle, and brakes in real-time. On a vehicle that can go 213 mph! ​Arrow Electronics and NVIDIA Collaborate on New AI-Based Steering System for SAM Car​

Planet Labs

How are you going to keep Jetsons out of space? Planet Labs is partnering with NVIDIA to enhance the onboard processing capabilities of its upcoming Pelican-2 satellite using NVIDIA’s Jetson AGX Orin platform. This integration aims to provide advanced AI-driven intelligent imaging and rapid data insights. The collaboration will enable near real-time data processing and delivery directly from orbit, significantly improving the satellite’s ability to monitor and analyze Earth phenomena such as forest fires and natural disasters.

For further details, visit the ​Investing.com article​.

Allxon Cloud Serial Console

JetsonHacks will be doing a review soon of the Allxon Out-of-Band Cloud Serial Console. This combination of hardware and software allows you to fully monitor and manage Jetson devices remotely. This should greatly simplify remote admin, and with hardware in the loop it promises to be a robust solution. ​https://www.allxon.com​

Jetson AI Lab

Last but not least, the Jetson AI Lab Research Group is at it again! At the last meeting, Dusty Franklin showed off an impressive Agent Studio demonstration. Agent Studio includes a node-based editor for connecting different sensors and AI components together with AI agents, which should reduce the amount of code you need to write for very interesting applications. Looky here at the demo from the last meeting: ​JETSON AI LAB | Research Group Meeting (6/11/2024)​

Understanding the Impact of AI on Computing and Programming

For the past year and a half, AI has been a hot topic, with many claiming it will write our programs, take our jobs, or even destroy humanity. That’s a lot to unpack! How exactly is AI going to do these things? We’ve seen impressive demos, but world domination seems more than a bit far off.

Let’s be clear: the idea of world domination by an AI seems strange. However, the notion of an AI company using the ‘feature’ of being able to recognize a cat to continuously record your and your family’s video and audio life is straight from Dr. Evil’s playbook. You can understand the interest in Jetson and compute on the edge to keep everything ‘in house’ and private, as it were.

From Deterministic to Stochastic Computing

A fundamental change from deterministic to stochastic computing seems odd. Historically, computing has been deterministic, meaning fully repeatable processes: people expect machines to do exactly what they are told, and the machines are designed that way. Stochastic machines, built by the Linear Algebra Mafia, operate differently; getting the same answer twice is hard. But for tasks like visual processing, this is beneficial. Eventually, we expect computer vision to match human vision and then some.
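The difference is easy to see in a toy sketch (Python, purely illustrative, with random noise standing in for the sampling that stochastic models do):

```python
import random

def deterministic(x):
    # Classic computing: same input, same output, every single time
    return x * x

def stochastic(x, temperature=1.0):
    # AI-style computing: the output is sampled, so repeated
    # calls with the same input can (and usually do) differ
    return x * x + random.gauss(0, temperature)

print(deterministic(3) == deterministic(3))   # always True
print(stochastic(3) == stochastic(3))         # almost never True
```

The deterministic version is trivially testable; the stochastic one can only be judged by whether its answers are close enough, which is exactly the mindset shift.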

AI in Programming and Mathematics

What about writing programs or doing math? Jensen Huang of NVIDIA suggests the future of coding is natural language. Curious, I spent a week with ChatGPT to explore this idea.

The Nature of Programming: Past and Present

When personal computers were much, much smaller and slower, programmers knew every detail of the system. Assembly language was common. Drawing lines on a screen with a hand-coded Bresenham’s line algorithm was a given. Space was limited, so there was no room for waste. Operating systems were minimal. If that weren’t enough, most people ended up stepping through their programs in an assembly language level debugger. They KNEW the system!
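For anyone who never had to hand-code it, here is a minimal Python sketch of Bresenham’s integer-only line algorithm (back then, of course, it would have been assembly):

```python
def bresenham(x0, y0, x1, y1):
    """Rasterize a line using only integer add/compare (Bresenham, 1965)."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy  # running error term decides when to step in y
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points

print(bresenham(0, 0, 3, 3))  # [(0, 0), (1, 1), (2, 2), (3, 3)]
```

No floating point, no multiplication in the loop; on machines where a divide took dozens of cycles, that was the whole point.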

Today, systems are so large that it’s hard to specialize in just one part, let alone many. Networking and smart devices add to the complexity. Entire industries focus on specific subsections and applications. Websites at scale are incredibly complicated. Huge codebases rely on many components, with no guarantees.

The Result: Limited Knowledge and Increased Complexity

Programmers, now called developers, can’t know everything about what they’re creating. We’ve shifted from knowing everything to knowing just a little. Libraries and components handle heavy lifting. ‘Tools’ like Stack Overflow and GitHub assist with common tasks or problems.

Computing stacks are fragile. Changes in base components like operating systems or programming languages break things. You know this: moving from one Jetson release to the next is painful. It’s a never-ending upgrade cycle. It would be nice if apps kept running, without you having to understand the whole system, when a library of which you use only a small slice changes.

Symbols and Abstract Thinking

Using symbols to aid in abstract thinking is one of man’s greatest inventions. Symbols like π, ∑, and ∫ allow clear communication in a universal language. Describing mathematical equations in natural language is painful. Visual interfaces and human interaction descriptions are equally challenging.

Experimenting with ChatGPT

I experimented with ChatGPT, specifying personas like Sally from Marketing, Don the CTO, and James the Lead Developer. I asked them to implement features and build a web application to keep track of a Persona database. 

When ChatGPT first came out, it was a nightmare trying to produce code. It’s a bit better now. For this task, it chose a Flask backend and a ReactJS frontend. However, once scripts grow beyond about a page and a half, issues arise. There are paths that lead to unrecoverable states or infinite loops when explaining errors or tasks.

You can explain and scold all you want, but an LLM doesn’t learn from its mistakes the way a junior coder does. Typically, you can tell a human, “Here’s a mistake, learn from it.” The LLM? Not so much.

The Role of AI in Large Organizations

In large organizations, management gurus teach that the square root of the number of people does 50% of the work, a variation of the Pareto Principle. For 1,000 people, that’s about 32 doing half the work. Does an LLM act as a force multiplier in research and development? In getting work done? Or does it mean management will eliminate jobs, thinking the AI will take up the slack? What does that mean at scale?
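As a quick sanity check on that rule of thumb, a couple of lines of Python (the function name is mine) reproduce the numbers:

```python
import math

def core_contributors(n):
    # Rule of thumb: about sqrt(N) people do 50% of the work
    return round(math.sqrt(n))

for n in (100, 1000, 10000):
    print(f"{n:>6} people -> ~{core_contributors(n)} do half the work")
# 1000 people -> ~32, matching the figure above
```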

Much of developing applications is boilerplate. Developers need to know which headers to import, which libraries to load, and the platform dependencies. Why do we still need to track imports, headers, libraries, and modules today? We have enough computing power to manage everything installed on a dev machine and serve up nicer development environments. To simply make an OS call or read the time, why do we have to figure out which headers and libraries to import? If LLMs handle the housekeeping by boilerplating apps and acting as super help, developers can focus on adding value. Oh, and having the LLM explain the code is very useful for maintainers. Remember that maintaining applications accounts for 80% of the cost of most commercial systems over time. However, it’s not clear that stochastic systems are great for creating and building deterministic systems.

That’s not to say LLMs don’t have applications! There are some things LLMs are really good at: translating text, summarizing, organizing, and so on. And by really good at, I mean amazing. Here’s a key point: one LLM is interesting, but multiple LLMs and AI agents will eventually get you as close to an answer as people can, in a lot less time and for a lot less money. The current going rate for consumer LLMs like ChatGPT Plus is around $20 per month. How much work can you get from a person for $20 nowadays?

It’s easy to imagine a scenario where you’re doing research, writing a paper, translating text, or some other task, and you ask an LLM to help. The response is usually in the ballpark and makes sense when you read it. Now imagine the second part: you ask another LLM (or two) to check the work and critique it, and then send it back to the original. It’s like hiring an editor, going back and forth until they are both “satisfied” the work is accurate and polished. What is the value of that work product?
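A sketch of that draft-and-critique loop, with toy stub functions standing in for real LLM API calls (every name here is hypothetical; a real version would call two different hosted models):

```python
def draft_model(prompt, feedback=None):
    # Stub for the "author" LLM; a real call would hit an API endpoint
    revision = f" (revised per: {feedback})" if feedback else ""
    return f"Draft answering '{prompt}'{revision}"

def critic_model(draft):
    # Stub for the "editor" LLM; returns None when satisfied,
    # otherwise a critique string. This toy critic approves
    # anything that has been revised at least once.
    return None if "revised" in draft else "add supporting detail"

def refine(prompt, max_rounds=3):
    # Bounce the work between author and critic until both
    # are "satisfied" or we run out of rounds
    draft = draft_model(prompt)
    for _ in range(max_rounds):
        feedback = critic_model(draft)
        if feedback is None:
            break
        draft = draft_model(prompt, feedback)
    return draft

print(refine("summarize the JetPack 6.0 release"))
```

The interesting part is the loop shape, not the stubs: each round is cheap, and the stopping condition is mutual agreement rather than a fixed number of passes.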


Conclusion

AI is changing computing and programming, shifting us from deterministic to stochastic computing. While AI can help with code generation, many challenges remain, especially with complex tasks and learning from mistakes. What’s your prediction for the future of AI coding?

—-

Thank you for taking the time to read this; I hope all is well your way. As always, reply to this email if you want to share some of your thoughts or you have an interesting story, product or three to share.

Jim


3 Responses

  1. Hi there Jim, I think we are in need of a breakthrough to allow GPTs to self-learn and realise their own mistakes if they are to become truly useful and live up to the hype. They are too much of a liability to see standalone use at the moment without a human overseer.

    Unfortunately that will also be the point at which anyone capable of doing their job remotely will be out of a job. Interesting times.

    1. Interesting indeed! From what I understand, there are a lot more promises than results being delivered. LLMs themselves don’t “learn”; they already know everything they’re going to know by the time they get to the end user. However, there are other ways for a system to “learn”. One way is to take a data set and do a version of transfer learning. Another is setting up a “RAG” system, which is basically a semantic lookup against a set of data defined outside the LLM itself. You do a semantic lookup and pass the result to the LLM along with your prompt. You’ll hear terms like “vector databases” bandied about for this approach.
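A toy sketch of the RAG idea described above, with word overlap standing in for real embedding similarity and a plain list standing in for a vector database (all names are mine, purely illustrative):

```python
# External knowledge the LLM was never trained on
documents = [
    "JetPack 6.0 ships Jetson Platform Services for microservices.",
    "The Jetson AGX Orin will fly on the Planet Labs Pelican-2 satellite.",
]

def score(query, doc):
    # Stand-in for cosine similarity between embedding vectors
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

def retrieve(query):
    # Semantic lookup: return the best-matching document
    return max(documents, key=lambda doc: score(query, doc))

def build_prompt(query):
    # Pass the retrieved context to the LLM along with the question
    return f"Context: {retrieve(query)}\nQuestion: {query}"

print(build_prompt("What is on the Pelican-2 satellite?"))
```

A production system swaps the word-overlap score for embeddings and the list for a vector database, but the shape — retrieve, then prompt — is the same.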

      In some sense, by its nature an LLM just generates hallucinations based on the training set it sees. It’s all the safety guardrails that people put up afterwards that make it seem like it’s making sense. It’s so early in the process of discovery that it’s difficult to say what is or isn’t possible. It would seem that multiple agents running against LLMs could help get “real” answers, but that’s still not proven.

      As for remote workers, that’s another interesting area. The switch to remote work is such an alien concept to ambitious people, especially in tech, that it’s difficult to figure out what it means to “normies”. Sure, if you have a remote job of filling out insurance forms and calling clients all day, then that job may be at risk. Tech support, the same thing. But that’s what outsourcing was all about, wasn’t it? It doesn’t matter whether it’s a machine or a person in another country making very low wages; that job is at risk. But people are adaptable, and can do things that are difficult to pre-train an AI to do.

      With that said, I would think that at least in the short term the AIs will be supplemental rather than replacements. We’ve seen several companies put a chatbot in a public-facing role and get much less than satisfactory results. That would lead me to believe that we’ll see much less hiring at companies in the short term, as AIs are brought in to help. At the same time, the economy is always such a big factor that it’s hard to predict what will happen.

      A real key is understanding how companies/governments work. There are few 1,000+ person companies that would not be able to survive separating from 10% of their work force assuming that the core business is not under attack. We see this all the time. The remote workers are an easy target in that sense, out of sight, out of mind.

      1. Very good point about off-shoring. I suppose if businesses are able to trust and control AI more than they can control off-shore employees it might result in more use of AI.


Disclaimer

Some links here are affiliate links. If you purchase through these links I will receive a small commission at no additional cost to you. As an Amazon Associate, I earn from qualifying purchases.
