City of Villains To Support PPU

AGEIA today announced that Cryptic Studios will update City of Villains to make use of the upcoming PhysX physics accelerator cards. The PPU support will mainly enhance the game's particle effects; the press release lists a few examples.

From The Chatty
  • reply
    March 21, 2006 11:09 AM

    someone tell me what to think about this. i'm too uninitiated to research it myself.

    honestly, I don't want to spend an extra $50 - $100 on something like this.

    • reply
      March 21, 2006 11:12 AM

      i seem to be one of the few people around here looking forward to accelerated physics, or at least its potential. but developer support seems meager.

      • reply
        March 21, 2006 11:17 AM

        I think the nVidia GPU-accelerated physics route is way more likely to succeed because it's free. There's just no way I'm plonking the cash down for yet another add-on card without a great reason (and CoH sure isn't it). As others have said, this AGEIA PhysX card still lacks its GLQuake, the killer app the Voodoo got.

        • reply
          March 21, 2006 12:19 PM

          So as I understand it, GPU accelerated physics provides effects only, i.e. non-gameplay-changing stuff. The game world can affect the physically-modelled effects, but not vice versa.

          My impression was that the AGEIA product was an engine that could do gameplay physics. (Although it's not clear that City of Villains is going to be using it for that right off.) Anyone know if I've got the wrong idea here?

          Anyway, sure it's too early for us mere mortals to be shelling out money for a physics accelerator card, but in the abstract the idea of an accelerator for _gameplay_ physics interests me more than the idea of better ways to model visual effects.

          It's cool that AGEIA is providing a library that can be used for non-accelerated physics modelling, just like OpenGL could be used with or without hardware acceleration. Makes it easier to adopt. However, unless they've got some real tricks up their sleeves, it sure seems like they need to get a market leader like Havok to add backend support for their card, if they want to go places.
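
          To make the OpenGL analogy concrete, here's a rough sketch of how a dual-path SDK might pick its backend at startup. Every name below is invented for illustration; this is not the real AGEIA API:

            // Hypothetical sketch -- invented names, not the real SDK.
            // The same solver API is exposed whether or not a PPU exists.
            #include <cstdio>

            enum class Backend { PPU, CPU };

            // Pretend probe; a real SDK would enumerate PCI devices here.
            static bool ppuPresent() { return false; }

            struct PhysicsSolver {
                Backend backend;
                // Same entry point either way; only the target differs.
                void step(float /*dt*/) {}
            };

            static PhysicsSolver createSolver() {
                return { ppuPresent() ? Backend::PPU : Backend::CPU };
            }

            int main() {
                PhysicsSolver solver = createSolver();
                std::printf("running on %s\n",
                            solver.backend == Backend::PPU
                                ? "the accelerator card" : "the host CPU");
                solver.step(1.0f / 60.0f);
            }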

          • reply
            March 21, 2006 12:55 PM

            Physics is physics, the end.

            The developers are the ones who have to make games where the physics profoundly affects the gameplay, or doesn't.

            • reply
              March 21, 2006 1:03 PM

              Nope. If you don't have the appropriate feedback path to get the data from the hardware accelerator back to the main CPU, you can "develop" all you like, but the accelerated physics ain't going to affect the gameplay in any way other than visual effects.
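
              As a rough sketch (all types invented, just to show the data flow), "gameplay physics" means something like this happening every frame:

                // Invented names: for physics to drive gameplay, the
                // simulation results must come back to the CPU each frame.
                #include <cstdio>
                #include <vector>

                struct Contact { int bodyA, bodyB; };

                struct PhysicsScene {
                    // Kick the step off to the accelerator (stubbed here).
                    void simulate(float /*dt*/) {}
                    // Blocking readback across the bus: the feedback path.
                    std::vector<Contact> fetchContacts() { return { {1, 2} }; }
                };

                static void applyGameplayDamage(const Contact& c) {
                    std::printf("bodies %d and %d collided -> damage\n",
                                c.bodyA, c.bodyB);
                }

                int main() {
                    PhysicsScene scene;
                    scene.simulate(1.0f / 60.0f);
                    // Without this loop, results could only feed the renderer:
                    for (const Contact& c : scene.fetchContacts())
                        applyGameplayDamage(c);  // physics alters game state
                }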

              • reply
                March 21, 2006 1:09 PM

                A bit of surfing provides the following quote about Havok FX (the "GPU accelerated physics" solution):

                Games will continue to use Havok Physics to provide scalable game-play physics with the typical “twitch” response times required to make physics fun and well-integrated with other core game components on the CPU. But Havok FX will be able to layer on top of that many 1000’s of objects (or organic effects based on clouds of objects and particles) that can be affected “downstream” by the game-play physics. There will be some limited feedback from the GPU to the CPU, but this will be lower priority and in general this is what allows the effects to be done extremely quickly on the GPU and in parallel to the game physics.

                http://www.firingsquad.com/features/havok_fx_interview/page2.asp

                So it sounds like Havok Physics (running on the CPU) handles the gameplay-affecting physics, and Havok FX (the "GPU accelerated physics") adds some physically modelled special effects.
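
                In sketch form (my own invented names, just paraphrasing the quote), that split looks something like:

                  // Invented names: authoritative gameplay physics on the
                  // CPU, a big effects layer driven downstream on the GPU,
                  // with only limited, low-priority feedback coming back.
                  struct Body { float pos[3], vel[3]; };

                  static void solveGameplayPhysics(Body* b, int n, float dt) {
                      // Authoritative: these results may change game state.
                      for (int i = 0; i < n; ++i)
                          for (int k = 0; k < 3; ++k)
                              b[i].pos[k] += b[i].vel[k] * dt;
                  }

                  static void uploadCollidersToGpu(const Body*, int) {}
                  static void simulateEffectsOnGpu(float /*dt*/) {}

                  static void frame(Body* bodies, int n, float dt) {
                      solveGameplayPhysics(bodies, n, dt); // "twitch" physics
                      uploadCollidersToGpu(bodies, n);     // one-way copy down
                      simulateEffectsOnGpu(dt);            // debris, particles
                  }

                  int main() {
                      Body bodies[4] = {};
                      frame(bodies, 4, 1.0f / 60.0f);
                  }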

              • reply
                March 21, 2006 1:51 PM

                well what, do you think the physics processor is doing all the computations and then placing the information... where? It's obviously being stored in main memory, which is accessible by the CPU. It certainly wouldn't be shipped directly to the video card; what is the video card going to do with it?

                • reply
                  March 21, 2006 2:06 PM

                  what is the video card going to do with it?

                  Well, if you're physically modelling special effects, I would assume that the video card is going to render it.

                  I'm not an expert about any of this, but do you see some other interpretation for the quote I posted above?


                  Also, to avoid confusion, let me reiterate that this is talking about Havok FX, not about AGEIA.

          • reply
            March 21, 2006 5:17 PM

            The problem is that the reason graphics cards can be so fast is they're a fire-and-forget system: they don't return any data to the game at all (and when they do, there are generally performance hits, since the nature of the beast is that it's designed as a one-way communication process).

            With a physics card you have to wait for data to finish so your game logic can handle collisions. Is the benefit of having dedicated circuits more than the cost of shifting all that data back and forth across the bus instead of having some SIMD code work in cache memory? For now, possibly, but how long until that nice physics card becomes a bottleneck again as CPU speeds increase?

            Your CPU doesn't have to wait for the graphics card to finish, but it would have to for physics processing.

            • reply
              March 21, 2006 6:41 PM

              Yar. The implied problem there is not in the specific latency numbers for shuffling the data around, but rather that once you go down this road at all, you need to embrace multithreaded programming to efficiently do things other than thumb-twiddling while the physics coprocessor does its thing.

              Of course, everyone is on board now with multicore as the wave of the future -- since they can't figure out anything else productive to do with all those darn transistors -- so multithreaded programming is going to become de rigueur eventually. (Although, by that point, will it still be a win to have a specialized physics coprocessor rather than just dumping the same work on one of your general-purpose cores? Hmm.)
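
              A minimal sketch of that overlap, with a plain worker thread standing in for the coprocessor (all names invented):

                // Kick the physics step off asynchronously, overlap it with
                // unrelated frame work, then hit the unavoidable sync point.
                #include <cstdio>
                #include <future>

                struct Results { int contacts; };

                static Results runPhysicsStep(float /*dt*/) {
                    return Results{ 3 };  // pretend heavy solver work here
                }

                int main() {
                    const float dt = 1.0f / 60.0f;
                    // Start the step on another thread (PPU stand-in)...
                    auto pending = std::async(std::launch::async,
                                              runPhysicsStep, dt);
                    // ...and overlap work that doesn't need the results:
                    // AI, animation, audio, last frame's draw calls.
                    Results r = pending.get();  // the sync point
                    std::printf("contacts this frame: %d\n", r.contacts);
                }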

        • reply
          March 21, 2006 12:25 PM

          Ah, GLQuake... that takes me back.

          GLQuake and Tomb Raider were the two killer apps for the Voodoo board for me. That $300 was worth every penny.

      • reply
        March 21, 2006 11:22 AM

        I'm one of the people looking forward to it too.

        It's the main reason I bought a 2x PCIe board despite never wanting to SLI graphics. According to an AGEIA interview I read once, there will eventually be a PCIe version that'll be more powerful than the PCI version they'll release first.

      • reply
        March 21, 2006 2:18 PM

        I agree with Maarten. I hope AGEIA succeeds so we can get more realistic games; I just don't think a GPU will be able to do it as well. I do hope the price drops, though. $200-$350 for a feature few games support will be a hard sell.

    • reply
      March 21, 2006 8:46 PM

      What to think about it: if you buy it, it's just another line to add to forum e-penis sigs. You know, like the douches who buy Raptors so their OS loads one second faster, since that is the longest second ever.

      Enhanced physics in a game = less thought put into actual plot/storyline development. Something for review sites to cover while skimming over the fact that the game totally sucks, but hey, it has the best (of the week's releases) throwable _insert object_, which just runs so much better if you have the card in your system.

      • reply
        March 21, 2006 8:59 PM

        Enhanced physics in a game = less thought put into actual plot/storyline development.

        Um... I don't quite see the correlation there. If that were the case, then you could say the same about increased polygon counts, or better AI, or even multiplayer.
