Futuremark Loves NVidia

by Steve Gibson, Jun 02, 2003 4:00pm PDT
Related Topics – NVidia, Futuremark

Here is your hardware scene soap opera of the day... So it looks like Futuremark suddenly doesn't feel like NVidia is cheating on 3DMark03 after all now? Yeah... check out the full statement on HardOCP, and here's a little summary and translation:

FutureMark Statement: Futuremark now has a deeper understanding of the situation and NVIDIA's optimization strategy. In light of this, Futuremark now states that NVIDIA's driver design is an application-specific optimization and not a cheat.

NVIDIA Statement: NVIDIA works closely with developers to optimize games for GeForceFX. These optimizations (including shader optimizations) are the result of the co-development process. This is the approach NVIDIA would have preferred also for 3DMark03.

Translation via HardOCP: FutureMark reneges on previous statements and confirms NVIDIA was not cheating on their benchmark, and NVIDIA will not take legal action against FutureMark that would bankrupt them.

Comments

53 Threads | 183 Comments

  • Programs made specifically to stress-test video cards should not be tweaked by the driver/hardware makers at all. The consumer wants a fair competition between all cards, so the tests and the data thrown at the tests should be the same.

    Take, for example, trying to find the fastest car from 0 to 60 mph. Would it be fair for one car maker to substitute the normal gas with a slightly better-performing gas? No.

    I would also rather see them work with the developers to make the various vertex/pixel/fragment programs better on the source end, rather than inside the driver on a specific case-by-case basis. The card and the drivers should perform exactly how we, the developers, instruct them to perform. Trading visual quality for performance should be the developers' decision, not theirs.

  • It makes you wonder what good setting up clip planes at the edges of the view frustum (which is what I can make of what's being done, according to the accusations) would do, seeing as the scene is clipped to the view frustum planes anyhow.

    It sounds like a completely stupid thing to do, as it wouldn't yield any speedup at all (rather, it would be a slowdown, as you'd be checking against twice as many clip planes and getting the same clipping done).

    As for the incorrect fragment program results, that could easily be put down to the driver TRYING to optimize the fragment programs, but making mistakes and altering the output.

    And before anyone shouts that I'm an nVidia fanboy: I've purchased one Radeon 9500 and one Radeon 9600... except that the Radeon 9500 died, and ATI haven't shipped the 9600 after 2 months of it being on order...
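    For reference, the clip-plane test the comment above is reasoning about boils down to a per-point plane-side check. This is a minimal, hypothetical sketch (not anything from the actual drivers or benchmark): a plane is stored as the equation ax + by + cz + d, and a point is "inside" when the expression is non-negative. The commenter's point is that an extra plane means running this same test against more planes per primitive, which only helps if those planes reject geometry the regular frustum planes would otherwise keep.

    ```c
    #include <stdio.h>

    /* Hypothetical plane in implicit form: a*x + b*y + c*z + d = 0.
     * Names and layout are illustrative, not from any real driver. */
    typedef struct { float a, b, c, d; } Plane;

    /* Returns 1 if the point (x, y, z) is on the inside of the plane
     * (non-negative side), 0 if it would be clipped away. */
    int inside(Plane p, float x, float y, float z) {
        return p.a * x + p.b * y + p.c * z + p.d >= 0.0f;
    }

    int main(void) {
        /* A clip plane at x = 0 keeping the +x half-space. */
        Plane left = { 1.0f, 0.0f, 0.0f, 0.0f };
        printf("%d\n", inside(left, 2.0f, 0.0f, 0.0f));  /* kept */
        printf("%d\n", inside(left, -2.0f, 0.0f, 0.0f)); /* clipped */
        return 0;
    }
    ```

    Each additional plane adds one such dot product per tested point, so planes that coincide with the frustum edges cost work without rejecting anything new.
    
    
    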