Longhorn graphics

eWeek discusses some of Longhorn’s forthcoming graphics features. Here’s a tidbit:

“For starters, it appears that, as some rumors have suggested, the distinction between vertex and pixel shaders will essentially go away. Instead, there will be what Microsoft is calling a Common Shader Core that will contain vertex and pixel shader operations.

Blythe hinted that other kinds of shaders may become available in this framework, though he declined to elaborate as to what those might be. Some possible features that could be added here would be collision detection, and more interestingly, physics calculations. There’s been a fair amount of published work coming out of academia about using GPUs’ floating-point horsepower to model fluid dynamics, and the movement of gaseous clouds (like smoke).

The line between vertex and pixel shader ops and instructions has been blurring for some time already. There was even some speculation that ATI pushed the R400 out to be the R500 because this architecture was going to try to unify its vertex and pixel shader units. It would appear WGF will lay the groundwork for that unification.”
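To make the "Common Shader Core" idea concrete, here's a toy C++ sketch of what a single operation set shared by every shader stage might look like. This is only an illustration of the concept; the names are mine, not Microsoft's, and WGF's actual shader model will of course look nothing like plain C++.

```cpp
#include <cmath>

// Toy illustration only: one set of "shader core" operations shared by
// both stages, instead of separate vertex- and pixel-specific instruction
// sets. All names here are invented for this sketch.
struct Vec4 { float x, y, z, w; };

// The common operation set, available identically to every shader stage.
namespace common_core {
    inline float dot3(const Vec4& a, const Vec4& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }
    inline Vec4 scale(const Vec4& v, float s) {
        return { v.x * s, v.y * s, v.z * s, v.w * s };
    }
}

// A vertex-stage program and a pixel-stage program drawing on the same core.
Vec4 vertex_stage(const Vec4& position, const Vec4& row0) {
    // e.g. part of a transform, built from the shared dot product
    return common_core::scale(position, common_core::dot3(position, row0));
}

Vec4 pixel_stage(const Vec4& normal, const Vec4& lightDir) {
    // a simple N-dot-L diffuse term using the very same shared operation
    float nDotL = common_core::dot3(normal, lightDir);
    return common_core::scale({1, 1, 1, 1}, nDotL > 0.0f ? nDotL : 0.0f);
}
```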

The eWeek article is exactly right. As CPU speeds appear to be approaching the top of the performance curve, graphics processing becomes even more valuable as a place to off-load display-oriented operations. But there's more to it than this. Graphics processors (GPUs) could provide powerful enhancements in new areas such as vision processing. Near term, there will probably be rewrites of many imaging/Photoshop-like apps that can discard preview options and instead render user changes in real time. Want to apply a Gaussian blur? Click. It's done.
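For a sense of what's behind "Click. It's done.", here's a rough CPU sketch of a separable Gaussian blur. The function name and parameters are just for illustration; the point is that the per-pixel loop below is independent, data-parallel work a GPU can spread across the whole image at once.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Illustrative sketch: one pass of a separable Gaussian blur over a
// grayscale image. Run it twice (horizontal, then vertical) for the full
// 2D blur at O(r) per pixel instead of O(r^2).
std::vector<float> gaussian_blur_pass(const std::vector<float>& src,
                                      int width, int height,
                                      float sigma, bool horizontal) {
    int radius = static_cast<int>(std::ceil(3.0f * sigma));
    std::vector<float> kernel(2 * radius + 1);
    float sum = 0.0f;
    for (int i = -radius; i <= radius; ++i) {
        kernel[i + radius] = std::exp(-(i * i) / (2.0f * sigma * sigma));
        sum += kernel[i + radius];
    }
    for (float& k : kernel) k /= sum;  // normalize the weights

    std::vector<float> dst(src.size());
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float acc = 0.0f;
            for (int i = -radius; i <= radius; ++i) {
                int sx = horizontal ? x + i : x;
                int sy = horizontal ? y : y + i;
                sx = std::min(std::max(sx, 0), width - 1);   // clamp at edges
                sy = std::min(std::max(sy, 0), height - 1);
                acc += kernel[i + radius] * src[sy * width + sx];
            }
            // Each output pixel depends only on the source image, so a GPU
            // can compute all of them simultaneously.
            dst[y * width + x] = acc;
        }
    }
    return dst;
}
```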

Also, GPUs may take us into the world of floating point. Just as we've long since left the days of 4-bit color, we may be on the cusp of transitioning away from integer-based arithmetic to floating point. This is interesting to me because over the years, some great developers have given me advice to stay away from floating point. Not that it's bad, but that it's not the optimal way of doing things: if you know the extent of your values, you can scale everything down to an integer-based system that'll be faster. Likewise, some of the compiler masters I've known have told me, "Give me a processor where the floating point is faster than integer and I'll show you a poorly designed chip." But these words of advice are not as valuable as they used to be, because the perceivable difference in performance between floating-point and integer math is disappearing. So the incentive to bypass floating point isn't there. Plus, floating point has a richness you can preserve, which could be used later. For instance, images stored as floating-point pixel values could hold more valuable data than their integer-based cousins.
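A quick, made-up example of that last point: darken a pixel and then brighten it back, and the 8-bit integer version loses information that the float pixel keeps. The specific values here are arbitrary.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    float   fpixel = 205.0f / 255.0f;  // pixel stored as a float
    uint8_t ipixel = 205;              // same pixel stored as an 8-bit integer

    // Darken to 10% of the original brightness...
    fpixel *= 0.1f;
    ipixel = static_cast<uint8_t>(ipixel * 0.1f);  // 205 * 0.1 -> 20 (truncated)

    // ...then brighten back by 10x.
    fpixel *= 10.0f;
    ipixel = static_cast<uint8_t>(ipixel * 10);    // 20 * 10 -> 200, not 205

    std::printf("float pixel:   %f (started at %f)\n", fpixel, 205.0f / 255.0f);
    std::printf("integer pixel: %u (started at 205)\n", ipixel);
    return 0;
}
```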

For all the speed gains in "rendering-oriented" apps that GPUs provide, the challenge as I see it is to bring the same kind of matrix/math horsepower to image analysis pipelines, or potentially to other massive-array data. For instance, what if you could extract 3D coordinates and shaded surfaces from multiple cameras or multiple "color" spaces in real time using the GPU? Or what about tracking points of interest on people in real time? Faster processors are helpful, but highly math-optimized GPUs might open up amazing new practical possibilities in computer vision.
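As a rough sketch of what I mean by tracking points of interest, here's a brute-force sum-of-squared-differences patch tracker. The names and parameters are mine, purely illustrative; the point is that every candidate position is independent of the others, which is precisely the shape of work GPU-style math hardware is good at.

```cpp
#include <cfloat>
#include <vector>

struct Match { int x, y; float score; };

// Find where a small template patch best matches inside a grayscale frame,
// by exhaustively scoring every position with sum-of-squared-differences.
Match track_patch(const std::vector<float>& frame, int fw, int fh,
                  const std::vector<float>& patch, int pw, int ph) {
    Match best{0, 0, FLT_MAX};
    // Slide the patch over every valid position in the frame.
    for (int y = 0; y + ph <= fh; ++y) {
        for (int x = 0; x + pw <= fw; ++x) {
            float ssd = 0.0f;
            for (int v = 0; v < ph; ++v)
                for (int u = 0; u < pw; ++u) {
                    float d = frame[(y + v) * fw + (x + u)] - patch[v * pw + u];
                    ssd += d * d;
                }
            if (ssd < best.score) best = {x, y, ssd};
        }
    }
    return best;  // best.x, best.y: where the tracked point moved this frame
}
```

On a CPU this inner loop is painfully slow at video rates; evaluated thousands of positions at a time on a GPU, it starts to look like real-time vision.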

Some of this is doable today, but you wind up writing to a subset of machines. There's nothing like an "OS standard" to reshuffle what is practical. Longhorn appears to be on the right track to opening up an exciting world of possibilities.

One other issue comes up along these lines: is there value in recognition processing units (RPUs) in general? What would the OS be like in terms of services if we could leverage dedicated speech, vision, and handwriting recognition processors? I wonder. We may be climbing an abstraction ladder over the next few years that seems too costly and inefficient to us old-timers, but is actually perfectly tuned to today's and tomorrow's realities, opening up an exciting new world of application possibilities.

As I see it, this is the challenge for Longhorn developers. They have the opportunity not only to incrementally improve the OS and make our lives a bit better, but also to provide services that have traditionally been thought of as luxuries and that, thanks to numerous evolutions in the hardware, can now inspire revolutionary thinking and have a tremendous impact on how people use computers.

Loren
http://www.lorenheiny.com
Loren Heiny (1961-2010) was a software developer and the author of several computer language textbooks. He graduated from Arizona State University with a degree in computer science. His first love was robotics.
