In NVIDIA Jetson land, this is the time of anticipation. The NVIDIA GTC Developer Conference for 2023 https://www.nvidia.com/gtc/ is just 10 days away. It’s certainly worth the time to register to gain access to the large number of sessions. The sessions cover virtually every part of the NVIDIA ecosystem, from supercomputing to Jetson AI on the edge. Best of all? It’s free!
If you have any interest in computing, graphics, or robotics, the talks are incredibly interesting. I cannot stress this enough. Sure, there are some duds here and there. Overall, though, the signal-to-noise ratio is incredibly high.
This is also the time of year when everyone is getting their next generation of products ready. Many new products built around the Jetson will be announced. I’m particularly excited about one of them, but mum’s the word until the GTC announcement makes it official. I’ll review it, natch.
As of this writing, the Jetson Orin NX 16GB modules https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/ are beginning to ship. We are also seeing complete systems with the Orin NX modules shipping. This is a couple of weeks ahead of schedule; hopefully that’s a good sign that the supply chain is clearing up.
The last newsletter argued that ChatGPT represents a paradigm shift in user interfaces. The user interface of modern search engines is much the same as it was 20-25 years ago. You type in search terms, and the search engine returns a list of web links matching those terms. The user explores the links, some of which may even be related to the query. Helpfully, there are usually hundreds of links to explore, spread out over many web pages.
The underlying technology of search engines is complex. Over the last 25 years, search technology has advanced tremendously. Billions spent on research and development bring comprehensive search results back in milliseconds, anywhere on the globe. As a user, you can see some of that peeking out when you make a Google search.
One example is the “People Also Ask” feature, which provides summaries of related searches. Google was one of the early adopters of machine learning, and it is this machine learning that provides the summaries. This is very much akin to the Large Language Model (LLM) technology that underlies ChatGPT.
The graphics and presentation of the search results are more attractive now. But let’s not kid ourselves, it’s new paint on a 20-year-old interface. Do you remember what your computer was like 20 years ago?
The ChatGPT interface changes that. Users simply ask questions directly in natural language. The system gives back an answer, remembering the context of the conversation. The user can then refine the question based on the answer given. This eliminates the need for users to break their questions down into search terms. In other words, the computer serves the user, not the other way around. It certainly seems like magic.
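The conversational pattern described above, where each answer is remembered so a follow-up question can refine it, can be sketched in a few lines. This is a minimal, hypothetical illustration; `chat_turn` and `answer_fn` are names of my own invention, with `answer_fn` standing in for whatever language model actually generates the reply.

```python
def chat_turn(history, question, answer_fn):
    """One conversational turn: both the question and the reply are
    appended to history, so the next question is answered in context."""
    history.append({"role": "user", "content": question})
    reply = answer_fn(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# Usage with a toy answer_fn that just counts the user turns so far:
history = []
echo = lambda h: f"answer #{sum(1 for m in h if m['role'] == 'user')}"
chat_turn(history, "What is a Jetson Orin NX?", echo)   # "answer #1"
chat_turn(history, "How much memory does it have?", echo)  # "answer #2"
```

The key point is the growing `history` list: unlike a search box, each request carries everything that came before it.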
The human to computer interface is important.
Which brings us to today’s topic: mixing desktop application interfaces and web page interfaces.
When people use a desktop application, they expect a certain paradigm. The paradigm is specific to the machine in use. A Macintosh has a certain look and feel. So does Windows. Linux too. Menu bars, scroll bars, dialog boxes, preference settings, and so on. The style guides are usually formal, and take up many pages of documentation.
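How platform-specific these conventions get shows up even in the smallest details, like the labels on keyboard shortcuts in menus: macOS convention uses the Command key where Windows and Linux conventionally use Ctrl. A minimal sketch (the function name and structure here are mine, not from any particular toolkit):

```python
import platform

def accelerator_label(key, system=None):
    """Return a platform-conventional label for a menu shortcut.

    macOS ("Darwin") convention is the Command key; Windows and most
    Linux desktops conventionally use Ctrl.
    """
    system = system or platform.system()
    modifier = "Cmd" if system == "Darwin" else "Ctrl"
    return f"{modifier}+{key.upper()}"

print(accelerator_label("s", "Darwin"))   # Cmd+S
print(accelerator_label("s", "Windows"))  # Ctrl+S
```

A native toolkit handles this for you automatically; the point is that an application which hard-codes one platform’s convention immediately feels foreign on the others.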
On the other hand, web pages are informal. There are a lot of ‘creative liberties’ taken. A web page operates in its own world. Mobile interfaces are somewhat stricter. If you use a web page as an interface on a mobile device, it should conform to the platform. There are guidelines: Google has Material Design, Apple has its Human Interface Guidelines.
Several years ago, people began migrating their desktop applications to the cloud. Many companies have done this; Adobe is one of the pioneers. By running in the cloud, Adobe is able to impose its own user interface regardless of platform. Adobe products are complex in their own right, and their user interfaces reflect that.
But here’s the thing. For cloud applications, many coders/designers use web interfaces. Except the apps are not presented in a web browser; they are in a stand-alone desktop application.
Here’s the problem. People expect desktop applications to act like other desktop applications on their machine. Web pages should act like web pages, usually in a browser.
Recently I was using the NVIDIA SDK Manager, which reminded me of this issue. The SDK Manager announces itself as a desktop application. Instead, it presents as a group of web pages stitched together.
I’m not beating on SDK Manager in particular, many applications now exhibit this fault. From the perspective of user experience, it breaks the magic. There is a reason that Apple has been so successful with their desktop environment. The user expects a consistent application experience. Windows, menus, buttons, dialogs. They are all consistent. Even if you don’t know how to effectively use an application, it is familiar. Microsoft Windows does this to a lesser extent.
Mobile device standards are much the same. Submit an app to the Apple App Store for approval, and Apple will reject it if it does not meet the user interface guidelines. No soup for you.
Among the reasons that Linux fails as a desktop is the lack of a consistent user experience. Crossing the stream between desktop and web apps only magnifies the problem.
What does user confusion mean to you, a developer? More support and more questions to answer. For no good reason. In fact, it upsets users and makes them look at the app with disdain.
As a developer, know the difference between a web and a desktop interface. If you find yourself using web interface elements in your desktop application, check yourself. The following menu idioms:
- Hamburger menus (three stacked horizontal lines)
- Kebab menus (three vertical dots)
- Meatball menus (three horizontal dots)
are all web page elements. Desktop applications should use them sparingly, if at all.
Many web page apps mimic desktops. This has similar issues. The user has expectations; you must meet them. If you find yourself reimplementing an entire desktop interface on a web page, do some self-reflection. What are you trying to deliver? Is this the experience you originally planned, or is it the result of feature creep?
There is certainly nothing wrong with using a web page as an underlying canvas in a desktop application. But you must make sure that there is consistency between the two.
Each of these paradigms has a barrier to entry. A desktop application is a real commitment; the minimum feature set is large. Web pages, less so.
Now, many developers build tools or small applications for their own use. As long as the interface suits them, that’s all that matters. This is true, up to a certain point.
That is the tricky part. Where is the “certain point”? Is it when 5 people are using the app? 25? 100? 1,000? Is it an internal tool, or is it public/customer facing? We all know that if you make it open source, it has the potential to reach a very large number of users. This is the first point in a project where technical debt starts to visibly build.
What do you think? When is the right time to sit down and design, then implement, a proper user interface? Do you let it grow organically and do a just-in-time redesign? All the user interface people will tell you to call them in first. Is that a cost you are willing to bear early on? Or do you just wait until the pain train arrives and you have to pay the price?
In the next newsletter, we’ll actually be covering Jetson news! There will be a lot of bright, shiny new products worth exploring. Stay tuned!