NVIDIA introduces the TITAN RTX - [NEWS]

Page 2 of 2 · Showing results 11 to 14 of 14
  1. #11
    pebibyte
    Joined: Jun 2005 · Posts: 4,371

    I was curious to understand what difference there is between the various precisions when computing, so I read around a bit...
    HenryEckstein, 2 months ago: NVIDIA Has Total B.S. Specifications!
    REAL TeraFLOPS are FULL 64-bits WIDE Double Precision and not this STOOOPID 16 bits Half-precision maths statistic!
    I've got 475+ Full 64-bits TeraFLOPS FULL Double Precision SUSTAINED per motherboard (NOT just PEAK performance!)! When you have THAT sort of horsepower (equivalent to 1.9 PetaFLOPS 16-bit Half-Precision and 950 TeraFLOPS 32-bit Single Precision math), THEN you can talk about having 130 TeraFLOPS on your boards!
    Hey NVIDIA! Give us the REAL 64-bits wide SUSTAINED Double Precision math performance specification and NOT THE FAKE SPECIFICATION!


    HenryEckstein, replying to Sam J. Dennis, 2 months ago: The problem is that scientists and ENGINEERS ...ABSOLUTELY NEED 64-bits wide computing AND NOT 32-bits single-precision or 16-bits half-precision math. NO! They need TRULY ACCURATE at least 64-bit integers and 64-bits-wide Real Number processing and NOT half-baked FAKE marketing terms that have NOTHING to do with the real world!
    My friend who owns a high-end aerospace firm got SOOOOO sick and tired of the CPU/GPU fakery, they bought an entire microcircuit engineering team and then went ahead and built their own custom CPU's/GPU's on GaAs (Gallium Arsenide) and GaN (Gallium Nitride) substrates which is WHY their Numeric Processor Arrays are doing sustained 60 GHz and as fast as TWO TERAHERTZ at a full 64-bits wide (Double Precision) AND EVEN 128-bits wide (Quad Precision) Signed and Unsigned Fixed Point and Floating Point plus Integers Math!
    Now they have 64-bits wide 475 TeraFLOPS and 128-bits wide 235 TeraFLOP motherboards for their high-end Aerospace CAD/CAM/FEA/Video work!


    As predicted, it is, as usual, only a little faster than the corresponding Ti.
    https://lambdalabs.com/blog/titan-rt...ow-benchmarks/

    It seems that for SERIOUS scientific applications (space and a good part of medicine) double precision is necessary, while for neural networks (deep learning) and the like single-precision float is sufficient.
    DP gives you more significant figures, and therefore reduces the effect of rounding errors. (Rounding errors can crop up where they are not expected. For example, the number 0.1, when converted to binary, is actually infinitely long - it is a repeating number, like 1/3 = 0.333.... So 0.1 will always be rounded off. Add enough 0.1s together and the rounding errors will combine, and you will get a noticeable discrepancy.)
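
    As a quick illustration (a minimal sketch assuming NumPy; the iteration count is picked just for demonstration), summing 0.1 repeatedly shows the drift:

    import numpy as np

    # Sum 0.1 one million times in single and double precision.
    # 0.1 is not exactly representable in binary, so every addition rounds.
    n = 1_000_000
    total32 = np.float32(0.0)
    total64 = np.float64(0.0)
    for _ in range(n):
        total32 += np.float32(0.1)
        total64 += np.float64(0.1)

    print(f"float32 sum: {total32:.6f}")  # drifts visibly away from 100000
    print(f"float64 sum: {total64:.6f}")  # stays very close to 100000
    print(f"true value : {n * 0.1:.6f}")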

    Certain types of computation, where the result of one step is fed into the next over and over, can accumulate errors. This is commonly found in scientific simulations, where the next step in the simulation depends on the solution of the previous step.

    There are also certain types of computation which by their nature require high precision at an intermediate step - sometimes it is possible to use a different method to work around this requirement, but sometimes you just have to use higher precision calculations. This is a problem that comes up in mathematical techniques like FFTs and matrix inversions.
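
    A small sketch of that intermediate-precision problem (assuming NumPy, and using the naive one-pass variance formula rather than an FFT or matrix inversion, since it shows the same cancellation in a few lines):

    import numpy as np

    rng = np.random.default_rng(0)
    x = 1e6 + rng.standard_normal(100_000)   # large mean, true variance about 1.0

    def naive_var(data, dtype):
        # One-pass formula: mean of squares minus square of mean.
        # The subtraction cancels two nearly equal large numbers.
        d = data.astype(dtype)
        return float((d * d).mean() - d.mean() ** 2)

    print("float32 naive variance:", naive_var(x, np.float32))  # wildly wrong, can even be negative
    print("float64 naive variance:", naive_var(x, np.float64))  # close to 1.0
    print("stable reference      :", float(x.var()))            # computed from deviations around the mean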

    For gaming, the need for precision and computational stability is low, and therefore DP isn't required. This is the case with most graphics. That said, I wrote a medical imaging app which had a volume rendering function - true volume rendering requires a render pass for each plane of voxels at each distance away from you. This app would use 200-300 pass rendering, with alpha blending on each pass. This worked fine with single precision floats, but with half precision (16 bit), you could start to see faint artefacts appearing where the rounding errors had built up.
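
    A rough sketch of that build-up (assuming NumPy; the alpha, source value and pass count are invented for illustration, not taken from that app):

    import numpy as np

    def composite(dtype, passes=250, alpha=0.02, src=0.7):
        # Repeated "over" blend: out = src*alpha + out*(1 - alpha), once per pass.
        a, s, out = dtype(alpha), dtype(src), dtype(0.0)
        for _ in range(passes):
            out = s * a + out * (dtype(1.0) - a)
        return float(out)

    ref = composite(np.float64)
    print("float64 result:", ref)
    print("float32 error :", abs(composite(np.float32) - ref))
    print("float16 error :", abs(composite(np.float16) - ref))  # typically orders of magnitude larger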

    In general, scientific and other industrial/engineering work uses DP to ensure that rounding errors don't pile up and create a noticeable error. Financial work shouldn't use floats at all (not even DP), because a fixed point system which can avoid rounding errors completely is better.
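
    A tiny sketch of the fixed-point point, in plain Python with invented amounts: keeping money as an integer number of cents (or using decimal arithmetic) avoids binary rounding entirely.

    from decimal import Decimal

    # 1,000 payments of 0.10 in three representations.
    float_total = sum(0.10 for _ in range(1_000))            # binary double: 0.1 is inexact
    cents_total = sum(10 for _ in range(1_000))              # fixed point: integer cents, exact
    dec_total = sum(Decimal("0.10") for _ in range(1_000))   # exact decimal arithmetic

    print(float_total == 100.0)                # False: rounding errors accumulated
    print(cents_total / 100 == 100.0)          # True: converted to currency only at the end
    print(dec_total == Decimal("100.00"))      # True
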
    Rightly so, otherwise there's no way they would have given it to you for €2,500.
    The 130 TF of Tensor performance is at half precision:
    https://en.wikipedia.org/wiki/List_o...essingPower-73
    So mixed precision is used to compensate for the errors of half precision:
    https://hackernoon.com/rtx-2080ti-vs...0-761d8f615d7f
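
    A rough sketch of the mixed-precision idea from that article (assuming NumPy; a plain dot product, not the actual Tensor Core path): keep the inputs in float16 but accumulate the products in a float32 register, so the half-precision rounding does not pile up in the sum.

    import numpy as np

    rng = np.random.default_rng(1)
    a = rng.standard_normal(4096).astype(np.float16)
    b = rng.standard_normal(4096).astype(np.float16)

    ref = np.dot(a.astype(np.float64), b.astype(np.float64))  # high-precision reference for the same fp16 inputs

    acc16 = np.float16(0.0)
    acc32 = np.float32(0.0)
    for x, y in zip(a, b):
        acc16 += x * y                              # pure half precision: sum rounded to fp16 each step
        acc32 += np.float32(x) * np.float32(y)      # mixed: fp16 inputs, fp32 multiply-accumulate

    print("fp16 accumulator error:", abs(float(acc16) - ref))
    print("mixed precision error :", abs(float(acc32) - ref))  # typically far smaller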

  2. #12
    Administrator · giampa
    Joined: May 2002 · Location: Pisa · Age: 59 · Posts: 23,859

    Quite an effort, thanks for your clarification


    "Scusate, ma se quest'anno in Texas ci avete spedito questo deficiente, vuol dire che c'è speranza per tutti?"

  3. #13
    Nexthardware Staff · brugola.x
    Joined: Feb 2007 · Location: 1/2 lombardo · Age: 50 · Posts: 18,799

    Well, I'd be curious to know how many of these they will actually manage to sell.
    "Designed" for research... forget crowdfunding!!! It costs a fortune!

  4. #14
    Daniele · Trattore
    Joined: Jul 2011 · Location: provincia Lecco · Age: 48 · Posts: 9,788

    Originally posted by brugola.x
    Well, I'd be curious to know how many of these they will actually manage to sell.
    "Designed" for research... forget crowdfunding!!! It costs a fortune!

    Hi Gerri, I suspect there will be no shortage of buyers despite the price



