
      Review: AMD’s Radeon RX 7700 XT and 7800 XT are almost great

      news.movim.eu / ArsTechnica · Wednesday, 6 September - 13:00

    AMD's Radeon RX 7800 XT. (credit: Andrew Cunningham)

    Nearly a year ago, Nvidia kicked off this GPU generation with its GeForce RTX 4090. The 4090 offers unparalleled performance but at an unparalleled price of $1,600 (prices have not fallen). It's not for everybody, but it's a nice halo card that shows what the Ada Lovelace architecture is capable of. Fine, I guess.

    The RTX 4080 soon followed, along with AMD's Radeon RX 7900 XTX and XT. These cards also generally offered better performance than anything you could get from a previous-generation GPU, but at still-too-high-for-most-people prices ranging from $900 to $1,200 (though all of those prices have fallen a bit). Fine, I guess.

    By the time we got to the 4070 Ti and 4070 launches this year, we were getting down to the level of performance that had been available from previous-generation cards. These GPUs offered a decent generational jump over their predecessors (the 4070 Ti performs kind of like a 3090, and the 4070 performs kind of like a 3080), but they also got big price bumps that took them closer to the pricing of the last-gen cards they performed like. Fine, I guess.



      Nvidia wants to buy CPU designer Arm—Qualcomm is not happy about it

      Jim Salter · news.movim.eu / ArsTechnica · Friday, 12 February, 2021 - 22:26 · 1 minute

    Some current Arm licensees view the proposed acquisition as highly toxic. (credit: Aurich Lawson / Nvidia)

    In September 2020, Nvidia announced its intention to buy Arm, the license holder for the CPU technology that powers the vast majority of mobile and high-powered embedded systems around the world.

    Nvidia's proposed deal would acquire Arm from Japanese conglomerate SoftBank for $40 billion—a number which is difficult to put into perspective. Forty billion dollars would represent one of the largest tech acquisitions of all time, but 40 Instagrams or so doesn't seem like that much to pay for control of the architecture supporting every well-known smartphone in the world, plus a staggering array of embedded controllers, network routers, automobiles, and other devices.

    Today’s Arm doesn’t sell hardware

    Arm's business model is fairly unusual in the hardware space, particularly from a consumer or small business perspective. Arm's customers—including hardware giants such as Apple, Qualcomm, and Samsung—aren't buying CPUs the way you'd buy an Intel Xeon or AMD Ryzen. Instead, they're purchasing the license to design and/or manufacture CPUs based on Arm's intellectual property. This typically means selecting one or more reference core designs, putting several of them in one system on chip (SoC), and tying them all together with the necessary cache and other peripherals.
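The licensing model described above amounts to composing a chip from licensed building blocks. Here is a minimal, hypothetical Python sketch of an Arm-style SoC bill of materials; the class names, core names, and counts are purely illustrative and do not describe any vendor's actual design:

```python
from dataclasses import dataclass, field

# Hypothetical model of an Arm-style SoC: the licensee picks reference core
# designs, instantiates several of them on one die, and ties them together
# with shared cache and peripherals. All names here are illustrative.

@dataclass
class CoreCluster:
    design: str  # licensed reference design, e.g. "Cortex-A78"
    count: int   # how many instances of that core go on the die

@dataclass
class SoC:
    name: str
    clusters: list
    l3_cache_mb: int
    peripherals: list = field(default_factory=list)

    def total_cores(self) -> int:
        # A phone SoC's core count is the sum across its clusters.
        return sum(c.count for c in self.clusters)

# A big.LITTLE-style layout: one fast core for bursts, a middle tier,
# and efficient cores for background work.
chip = SoC(
    name="ExamplePhoneSoC",
    clusters=[CoreCluster("Cortex-X1", 1),
              CoreCluster("Cortex-A78", 3),
              CoreCluster("Cortex-A55", 4)],
    l3_cache_mb=4,
    peripherals=["GPU", "ISP", "modem"],
)
print(chip.total_cores())  # 8
```

The point of the sketch is simply that the licensee's design work is selection and integration, not buying a finished CPU off a shelf.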



      $340,000 of Nvidia RTX 3090 graphics cards were stolen in China

      Jim Salter · news.movim.eu / ArsTechnica · Monday, 7 December, 2020 - 22:25 · 1 minute

    The GPU Grinch doesn't care about your lists or whether you've been naughty or nice. (credit: Aurich Lawson / Dr. Seuss / Getty Images)

    Sometime last week, thieves stole a large number of Nvidia-based RTX 3090 graphics cards from MSI's factory in mainland China. The news comes from Twitter user @GoFlying8, who posted what appears to be an official MSI internal document about the theft this morning, along with commentary from a Chinese-language website.

    Roughly translated—in other words, OCR scanned, run through Google Translate, and with the nastiest edges sawn off by yours truly—the MSI document reads something like this:

    Ensmai Electronics (Deep) Co., Ltd.
    Announcement
    Memo No. 1-20-12-4-000074
    Subject: Regarding the reported theft of graphics cards, and the reward offered for information

    Explanation:

    1. Recently, high-value graphics cards produced by the company have been stolen by criminals. The case has been reported to the police. At the same time, we hope that all employees of the company will actively and truthfully report any information about this case.
    2. Anyone providing information which solves this case will receive a reward of 100,000 yuan. The company promises to keep the identity of the whistleblower strictly confidential.
    3. If any person is involved in the case, they should, from the date of this announcement, report to the company's audit department or to the head of their department. If the report is truthful and assists in the recovery of the missing items, the company will report this to the police and request leniency; otherwise, the case will be handled severely under the law.
    4. With this announcement, we urge our colleagues to maintain professional ethics, observe discipline, learn from this case, and take it as a warning.
    5. Reporting Tel: [elided]

    Reporting mailbox of the Audit Office: [elided]
    December 4, 2020

    There has been some confusion surrounding the theft in English-speaking tech media; the MSI document itself dates to last Friday and does not detail how many cards were stolen or what their total value was. The surrounding commentary, from what appears to be a Chinese news app, claims that the theft amounted to about 40 containers of RTX 3090 cards, at a total value of about 2.2 million renminbi (roughly $336,000).



      Mac mini and Apple Silicon M1 review: Not so crazy after all

      Samuel Axon · news.movim.eu / ArsTechnica · Thursday, 19 November, 2020 - 14:03

    Apple is crazy, right? The Mac just had its best year of sales ever, and Cupertino is hitting the platform with a shock like it hasn’t had in nearly 15 years—back in a time when the Mac was not having such a good year. Apple is beginning the process of replacing industry-standard Intel chips with its own, custom-designed silicon.

    In a way, we're not just reviewing the new Mac mini—a Mac mini is always a Mac mini, right? We're reviewing an ARM-based Mac for the first time. And this is not exactly the same story as all the other ARM machines we've looked at before, like Windows 10 on ARM—a respectable option with some serious tradeoffs.

    Sure, longer battery life and quick waking from sleep are already out there on other ARM computers. But as you may have seen in our hands-on earlier this week, what we're encountering here is also a performance leap—and as you'll also see in this review, a remarkable success at making this new architecture compatible with a large library of what could now, suddenly, be called legacy Mac software.



      Amazon begins shifting Alexa’s cloud AI to its own silicon

      Jim Salter · news.movim.eu / ArsTechnica · Friday, 13 November, 2020 - 18:07 · 1 minute

    Amazon engineers discuss the migration of 80% of Alexa's workload to Inferentia ASICs in this three-minute clip.

    On Thursday, an Amazon AWS blog post announced that the company has moved most of the cloud processing for its Alexa personal assistant off of Nvidia GPUs and onto its own Inferentia application-specific integrated circuit (ASIC). Amazon's Sébastien Stormacq describes Inferentia's hardware design as follows:

    AWS Inferentia is a custom chip, built by AWS, to accelerate machine learning inference workloads and optimize their cost. Each AWS Inferentia chip contains four NeuronCores. Each NeuronCore implements a high-performance systolic array matrix multiply engine, which massively speeds up typical deep learning operations such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses, dramatically reducing latency and increasing throughput.
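The "systolic array matrix multiply engine" mentioned above can be illustrated in miniature. The Python sketch below simulates an output-stationary systolic dataflow: each cell of an n × m grid performs one multiply-accumulate per clock step as operands stream past it. The sizes and function names are illustrative only and have nothing to do with Inferentia's real dimensions:

```python
# Toy simulation of an output-stationary systolic matrix multiply.
# Each cell (i, j) of the grid owns accumulator C[i][j]; on step t it
# receives A[i][t] and B[t][j] and adds their product. In hardware all
# cells fire in parallel each clock, which is where the speedup comes from.

def systolic_matmul(A, B):
    n, k = len(A), len(A[0])
    m = len(B[0])
    # One accumulator per MAC cell in the n x m grid.
    C = [[0] * m for _ in range(n)]
    # Each "clock step" t streams the t-th operand pair through every cell.
    for t in range(k):
        for i in range(n):
            for j in range(m):
                C[i][j] += A[i][t] * B[t][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # [[19, 22], [43, 50]]
```

In software the three loops run sequentially; a systolic array replaces the two inner loops with a physical grid of MAC cells, so the work finishes in roughly k steps instead of n × m × k.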

    When an Amazon customer—usually someone who owns an Echo or Echo dot—makes use of the Alexa personal assistant, very little of the processing is done on the device itself. The workload for a typical Alexa request looks something like this:

    1. A human speaks to an Amazon Echo, saying: "Alexa, what's the special ingredient in Earl Grey tea?"
    2. The Echo detects the wake word—Alexa—using its own on-board processing
    3. The Echo streams the request to Amazon data centers
    4. Within the Amazon data center, the voice stream is converted to phonemes (Inference AI workload)
    5. Still in the data center, phonemes are converted to words (Inference AI workload)
    6. Words are assembled into phrases (Inference AI workload)
    7. Phrases are distilled into intent (Inference AI workload)
    8. Intent is routed to an appropriate fulfillment service, which returns a response as a JSON document
    9. JSON document is parsed, including text for Alexa's reply
    10. Text form of Alexa's reply is converted into natural-sounding speech (Inference AI workload)
    11. Natural speech audio is streamed back to the Echo device for playback—"It's bergamot orange oil."
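The eleven steps above can be sketched as a chain of stubbed functions. This is a hypothetical illustration of the pipeline's shape, not Amazon's actual code; every function here is a placeholder standing in for a model or service:

```python
import json

# Hypothetical sketch of the Alexa request flow. Stages marked "inference
# workload" are the ones the article says now run on Inferentia; here each
# is a hard-coded stub so the wiring between stages is visible.

def speech_to_phonemes(audio):   # inference workload: audio -> phonemes
    return ["w", "ʌ", "t", "s"]

def phonemes_to_words(phonemes): # inference workload: phonemes -> words
    return ["what's", "the", "special", "ingredient", "in", "earl", "grey", "tea"]

def words_to_intent(words):      # inference workload: phrases -> intent
    return {"intent": "QueryIngredient", "item": "earl grey tea"}

def fulfill(intent):             # fulfillment service returns a JSON document
    return json.dumps({"reply": "It's bergamot orange oil."})

def text_to_speech(text):        # inference workload: text -> audio
    return f"<audio:{text}>"

def handle_request(audio_stream):
    # Steps 4-11: everything after wake-word detection happens server-side.
    phonemes = speech_to_phonemes(audio_stream)
    words = phonemes_to_words(phonemes)
    intent = words_to_intent(words)
    response = json.loads(fulfill(intent))
    return text_to_speech(response["reply"])

print(handle_request(b"..."))  # <audio:It's bergamot orange oil.>
```

The only step missing from the sketch is wake-word detection, which is the one piece the Echo performs on-device.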

    As you can see, almost all of the actual work done in fulfilling an Alexa request happens in the cloud—not in an Echo or Echo Dot device itself. And the vast majority of that cloud work is performed not by traditional if-then logic but by inference, which is the answer-providing side of neural network processing.



      The AMD Radeon 6000 cards are official: here's what's under the hood

      Jerome Durel · news.movim.eu / JournalDuGeek · Wednesday, 28 October, 2020 - 16:52 · 2 minutes

    After the new Ryzen 5000 CPUs, AMD announced its new Radeon 6000 GPUs this Wednesday, October 28, via an online press conference. With this new range of graphics processors, AMD fully intends to regain the advantage over Nvidia, as it did over Intel with its CPUs.

    RDNA 2 DNA

    The headline feature of these "Big Navi" GPUs is that they are based on RDNA 2. Built on a 7 nm process, the chips deliver 54 percent more performance per watt than the original RDNA architecture. In practice, AMD claims roughly double the performance on average across a selection of PC games.

    As with its CPUs, these GPUs now use a unified cache, dubbed "Infinity Cache," which substantially improves bandwidth. According to AMD, effective bandwidth is doubled "in practice" compared to a conventional 384-bit bus.

    Unsurprisingly, the cards support ray tracing, but AMD was also keen to highlight its efforts to reduce latency. A new anti-lag technology, coupled with FreeSync, cuts latency by 37 percent in Fortnite at 4K.

    The Radeon 6000 series in brief

    • Radeon RX 6800 XT

    72 compute units, 2015 MHz (2250 MHz boost), 128 MB of cache, and 16 GB of GDDR6, all at 300 W. It is designed for 4K gaming at 60 fps and exceeds 144 Hz at 1440p in many demanding games.

    It will be available November 18 for $649.

    • Radeon RX 6800

    60 compute units, 1815 MHz (2105 MHz boost), 128 MB of cache, and 16 GB of GDDR6, at 250 W. This card is positioned as the entry point to 4K, meaning it can reach 60 fps at that resolution but with less headroom.

    It will be available November 18 for $579.

    • Both are topped by the RX 6900 XT

    80 compute units, 2015 MHz (2250 MHz boost), 128 MB of cache, and 16 GB of GDDR6, also at 300 W. This card is billed as "the ultimate 4K gaming experience" in the same form factor as the 6800 XT, a thinly veiled jab at Nvidia and its bulky RTX 3090. It can deliver 150 frames per second in 4K Ultra in Doom Eternal.

    It will be available December 8 for $999.
