GeForce 8 Cards to Gain PhysX Engine Support

By Chris Faylor, Feb 14, 2008 9:25am PST Hardware manufacturer Nvidia, which just purchased physics technology developer AGEIA, is porting AGEIA's PhysX engine software to run on its GeForce 8 cards, according to The Tech Report.

During a financial call, Nvidia CEO Jen-Hsun Huang revealed that the ported engine will bring enhanced physics capabilities to all of the company's existing GeForce 8 cards, as it will be programmed in CUDA.

"Finally [developers are] able to get a physics engine accelerated into a very large population of gamers," explained Huang. "[It's] just gonna be a software download. Every single GPU that is CUDA-enabled will be able to run the physics engine when it comes...Every one of our GeForce 8-series GPUs runs CUDA."

At the time of the AGEIA purchase, Nvidia noted its intent to integrate PhysX support into its products, but did not specify any details. In light of today's revelation, Huang expects to see increased sales of Nvidia cards, especially among gamers whose motherboards are equipped with SLI slots.

"It might, and probably will, encourage people to buy a second GPU for their SLI slot," he said. "And for the highest-end gamer, it will encourage them to buy three GPUs. Potentially two for graphics and one for physics, or one for graphics and two for physics."

See All Comments | 27 Threads | 145 Comments
  • This is great news!
    This will open the door for physics hardware. Hopefully we'll soon see a standard physics API (directPhysX?).
    Before this, I was pretty sure we'd see physics hardware die, and game physics advancement climb the same slow path it has been on, waiting for people to buy more CPU cores and more memory bandwidth.
    Once you've got a standard API, you can then start having third-party companies make dedicated physics hardware (or maybe it'll stay just on the graphics card).

    In the end I hope this leads to more complex physics interactions in games, I love that stuff.
    The downside is that hardware might cost more, but I don't really care that much. If I did, I'd be a console guy.

    It also makes a lot of sense to do physics on the card. You've already got the object polygons in card memory, so you don't need to eat up more bus bandwidth transferring them again. The question, though, is how efficiently you can get the updated object vectors back out of the card.
    I'd love to read a white paper about all the technical hurdles they're going to have to deal with.
