
      Redis’ license change and forking are a mess that everybody can feel bad about

      news.movim.eu / ArsTechnica · Monday, 1 April - 17:47

    An Amazon Web Services (AWS) data center under construction in Stone Ridge, Virginia, in March 2024. Amazon will spend more than $150 billion on data centers in the next 15 years. (credit: Getty Images)

    Redis, a tremendously popular tool for storing data in memory rather than in an on-disk database, recently switched its licensing from an open source BSD license to dual licensing under the Redis Source Available License (RSALv2) and the Server Side Public License (SSPLv1).

    The software project and the company supporting it were fairly clear about why they did this. Redis CEO Rowan Trollope wrote on March 20 that while Redis and volunteers sponsored the bulk of the project's code development, "the majority of Redis’ commercial sales are channeled through the largest cloud service providers, who commoditize Redis’ investments and its open source community." Clarifying a bit, he added that "cloud service providers hosting Redis offerings will no longer be permitted to use the source code of Redis free of charge."

    Clarifying even further: Amazon Web Services (and lesser cloud giants), you cannot continue reselling Redis as a service as part of your $90 billion business without some kind of licensed contribution back.


      Amazon unleashes Q, an AI assistant for the workplace

      news.movim.eu / ArsTechnica · Wednesday, 29 November - 17:13

    The Amazon Q logo. (credit: Amazon)

    On Tuesday, Amazon unveiled Amazon Q, an AI chatbot similar to ChatGPT that is tailored for corporate environments. Developed by Amazon Web Services (AWS), Q is designed to assist employees with tasks like summarizing documents, managing internal support tickets, and providing policy guidance, differentiating itself from consumer-focused chatbots. It also serves as a programming assistant.

    According to The New York Times, the name "Q" is a play on the word "question" and a reference to the character Q in the James Bond novels, who makes helpful tools. (And there's apparently a little bit of Q from Star Trek: The Next Generation thrown in, although hopefully the new bot won't cause mischief on that scale.)

    Amazon Q's launch positions it against existing corporate AI tools like Microsoft's Copilot, Google's Duet AI, and ChatGPT Enterprise. Unlike some of its competitors, Amazon Q isn't built on a single large language model (LLM). Instead, it uses a platform called Bedrock, integrating multiple AI systems, including Amazon's Titan and models from Anthropic and Meta.
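
    To make the multi-model point concrete, below is a minimal, hypothetical sketch of calling a Bedrock-hosted model with the boto3 SDK. This isn't Amazon Q's internal plumbing (Amazon hasn't published that); the model ID and request shape follow Bedrock's documented Claude text-completion interface, and pointing the same request at a different vendor's model is largely a matter of changing the modelId.

    ```python
    # Hypothetical illustration of Bedrock's multi-model API, not
    # Amazon Q's internals. Assumes boto3 with Bedrock support and
    # AWS credentials with access to the Claude v2 model.
    import json

    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    body = json.dumps({
        "prompt": "\n\nHuman: Summarize our expense policy in one "
                  "sentence.\n\nAssistant:",
        "max_tokens_to_sample": 300,
    })

    # Swapping modelId routes the same request to a different vendor's
    # model (an Amazon Titan or Meta Llama ID, for example).
    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",
        contentType="application/json",
        accept="application/json",
        body=body,
    )

    print(json.loads(response["body"].read())["completion"])
    ```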


      How we host Ars, the finale and the 64-bit future

      news.movim.eu / ArsTechnica · Wednesday, 9 August, 2023 - 13:00

    (credit: Aurich Lawson | Getty Images)

    Greetings, dear readers, and congratulations—we've reached the end of our four-part series on how Ars Technica is hosted in the cloud, and it has been a journey. We've gone through our infrastructure, our application stack, and our CI/CD strategy (that's "continuous integration and continuous deployment"—the process by which we manage and maintain our site's code).

    Now, to wrap things up, we have a bit of a grab bag of topics to go through. In this final part, we'll discuss some leftover configuration details I didn't get a chance to dive into in earlier parts—including how our battle-tested liveblogging system works (it's surprisingly simple, and yet it has withstood millions of readers hammering at it during Apple events). We'll also peek at how we handle authoritative DNS.

    Finally, we'll close on something that I've been wanting to look at for a while: AWS's cloud-based 64-bit ARM service offerings. How much of our infrastructure could we shift over onto ARM64-based systems, how much work would that be, and what might the long-term benefits be, both in terms of performance and costs?
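
    As a taste of what that evaluation involves, here's a minimal sketch (my own illustration, not code from the series) that uses boto3 to list the ARM64-capable EC2 instance types in a region, which is a reasonable first step in sizing up potential Graviton migration targets.

    ```python
    # Exploratory sketch: survey AWS's ARM64 (Graviton) instance
    # types with boto3. The filter name is part of the documented
    # EC2 DescribeInstanceTypes API; the region choice is arbitrary.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    paginator = ec2.get_paginator("describe_instance_types")
    pages = paginator.paginate(
        Filters=[{
            "Name": "processor-info.supported-architecture",
            "Values": ["arm64"],
        }]
    )

    for page in pages:
        for itype in page["InstanceTypes"]:
            # Print each ARM64-capable type with its vCPU and RAM size.
            print(
                itype["InstanceType"],
                itype["VCpuInfo"]["DefaultVCpus"], "vCPUs,",
                itype["MemoryInfo"]["SizeInMiB"] // 1024, "GiB",
            )
    ```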


      How we host Ars Technica in the cloud, part two: The software

      news.movim.eu / ArsTechnica · Wednesday, 26 July, 2023 - 13:00 · 1 minute

    Welcome aboard the orbital HQ, readers! (credit: Aurich Lawson | Getty Images)

    Welcome back to our series on how Ars Technica is hosted and run! Last week, in part one, we cracked open the (virtual) doors to peek inside the Ars (virtual) data center. We talked about our Amazon Web Services setup, which is primarily built around ECS containers being spun up as needed to handle web traffic, and we walked through the ways that all of our hosting services hook together and function as a whole.
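
    For a concrete sense of what "ECS containers being spun up as needed" means mechanically, here is a small, hypothetical boto3 sketch. The cluster and service names are invented, and in practice the desired count is driven by autoscaling policies rather than manual calls; this just shows the knob that gets turned.

    ```python
    # Hypothetical sketch of scaling an ECS service; NOT Ars'
    # actual configuration. Cluster and service names are made up.
    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # Scale the (hypothetical) web front-end service to eight tasks.
    ecs.update_service(
        cluster="ars-web-cluster",   # assumed name
        service="ars-frontend",      # assumed name
        desiredCount=8,
    )

    # Check the rollout: runningCount converges toward desiredCount
    # as ECS launches or drains container tasks.
    desc = ecs.describe_services(
        cluster="ars-web-cluster",
        services=["ars-frontend"],
    )
    svc = desc["services"][0]
    print(svc["desiredCount"], svc["runningCount"])
    ```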

    This week, we shift our focus to a different layer in the stack—the applications we run on those services and how they work in the cloud. Those applications, after all, are what you come to the site for; you’re not here to marvel at a smoothly functioning infrastructure but rather to actually read the site. (I mean, I’m guessing that’s why you come here. It’s either that or everyone is showing up hoping I’m going to pour ketchup on myself and launch myself down a Slip-'N-Slide, but that was a one-time thing I did a long time ago when I was young and needed the money.)

    How traditional WordPress hosting works

    Although I am, at best, a casual sysadmin, having hung up my pro spurs a decade and change ago, I do have some relevant practical experience hosting WordPress. I’m currently the volunteer admin for a half-dozen WordPress sites, including Houston-area weather forecast destination Space City Weather (along with its Spanish-language counterpart Tiempo Ciudad Espacial), the Atlantic hurricane-focused blog The Eyewall, my personal blog, and a few other odds and ends.


      Setting our heart-attack-predicting AI loose with “no-code” tools

      news.movim.eu / ArsTechnica · Tuesday, 9 August, 2022 - 13:00 · 1 minute

    Ahhh, the easy button! (credit: Aurich Lawson | Getty Images)

    This is the second episode in our exploration of "no-code" machine learning. In our first article, we laid out our problem set and discussed the data we would use to test whether a highly automated ML tool designed for business analysts could return cost-effective results near the quality of more code-intensive methods involving a bit more human-driven data science.

    If you haven't read that article, you should go back and at least skim it. If you're all set, let's review what we'd do with our heart attack data under "normal" (that is, more code-intensive) machine learning conditions and then throw that all away and hit the "easy" button.

    As we discussed previously, we're working with a set of cardiac health data derived from a study at the Cleveland Clinic Foundation and the Hungarian Institute of Cardiology in Budapest (as well as other places whose data we've discarded for quality reasons). All that data is available in a repository we've created on GitHub, but its original form is part of a repository of data maintained for machine learning projects by the University of California, Irvine. We're using two versions of the data set: a smaller, more complete one consisting of 303 patient records from the Cleveland Clinic and a larger (597-patient) database that incorporates the Hungarian Institute data but is missing two of the types of data from the smaller set.
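
    For readers who want to poke at the same data, here's a quick sketch of pulling the Cleveland set straight from the UCI repository into pandas. The URL and column names come from UCI's public files; the Ars project worked from its own cleaned GitHub copy, so treat this as an approximation of the starting point.

    ```python
    # Load the classic UCI Cleveland heart-disease data. URL and
    # column names come from the public UCI repository; the Ars
    # project used its own cleaned copy on GitHub.
    import pandas as pd

    URL = (
        "https://archive.ics.uci.edu/ml/machine-learning-databases/"
        "heart-disease/processed.cleveland.data"
    )

    COLUMNS = [
        "age", "sex", "cp", "trestbps", "chol", "fbs", "restecg",
        "thalach", "exang", "oldpeak", "slope", "ca", "thal", "num",
    ]

    # The raw file marks missing values with "?".
    df = pd.read_csv(URL, names=COLUMNS, na_values="?")

    print(df.shape)                  # (303, 14): the smaller Cleveland set
    print(df["num"].value_counts())  # 0 = no disease; 1-4 = severity
    ```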


      Amazon to roll out tools to monitor factory workers and machines

      Financial Times · news.movim.eu / ArsTechnica · Tuesday, 1 December, 2020 - 19:55

    (credit: Emanuele Cremaschi | Getty Images)

    Amazon is rolling out cheap new tools that will allow factories everywhere to monitor their workers and machines, as the tech giant looks to boost its presence in the industrial sector.

    Launched by Amazon’s cloud arm AWS, the new machine learning-based services include hardware to monitor the health of heavy machinery, and computer vision capable of detecting whether workers are complying with social distancing.

    Amazon said it had created a two-inch, low-cost sensor—Monitron—that can be attached to equipment to monitor abnormal vibrations or temperatures and predict future faults.
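
    Amazon hasn't published how Monitron's fault prediction works internally, but the general technique the article describes, flagging readings that deviate sharply from a learned baseline, can be sketched in a few lines. The window and threshold values below are arbitrary illustrations.

    ```python
    # Illustrative rolling z-score anomaly detector for vibration
    # readings; a sketch of the general technique, not Monitron's
    # actual algorithm (which Amazon hasn't published).
    import numpy as np

    def anomalous(readings, window=100, threshold=4.0):
        """Return indices where a reading deviates sharply from the
        rolling statistics of the preceding `window` samples."""
        readings = np.asarray(readings, dtype=float)
        flags = []
        for i in range(window, len(readings)):
            baseline = readings[i - window:i]
            mu, sigma = baseline.mean(), baseline.std()
            if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
                flags.append(i)
        return flags

    # Example: steady vibration with one sudden spike at sample 150.
    data = np.random.normal(1.0, 0.05, 200)
    data[150] = 2.5
    print(anomalous(data))  # typically prints [150]
    ```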


      Amazon begins shifting Alexa’s cloud AI to its own silicon

      Jim Salter · news.movim.eu / ArsTechnica · Friday, 13 November, 2020 - 18:07 · 1 minute

    Amazon engineers discuss the migration of 80% of Alexa's workload to Inferentia ASICs in this three-minute clip.

    On Thursday, an AWS blog post announced that the company has moved most of the cloud processing for its Alexa personal assistant off of Nvidia GPUs and onto its own Inferentia application-specific integrated circuit (ASIC). AWS developer advocate Sébastien Stormacq describes Inferentia's hardware design as follows:

    AWS Inferentia is a custom chip, built by AWS, to accelerate machine learning inference workloads and optimize their cost. Each AWS Inferentia chip contains four NeuronCores. Each NeuronCore implements a high-performance systolic array matrix multiply engine, which massively speeds up typical deep learning operations such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses, dramatically reducing latency and increasing throughput.
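
    To ground that a bit, here's a minimal sketch of what targeting Inferentia looks like with the Neuron SDK's PyTorch integration (the Inf1-era torch-neuron package). The ResNet-50 stand-in is my own choice; Amazon hasn't published Alexa's actual models.

    ```python
    # Minimal torch-neuron compile sketch for Inf1/Inferentia. The
    # model is a stand-in, not an Alexa workload; torchvision's
    # pretrained flag is assumed (older torchvision versions).
    import torch
    import torch_neuron  # noqa: F401 (registers torch.neuron)
    from torchvision import models

    model = models.resnet50(pretrained=True).eval()
    example = torch.rand(1, 3, 224, 224)

    # Ahead-of-time compile the graph for the NeuronCores; operators
    # the compiler can't place are left to run on the host CPU.
    model_neuron = torch.neuron.trace(model, example_inputs=[example])
    model_neuron.save("resnet50_neuron.pt")

    # On an Inf1 instance, the artifact loads like any TorchScript
    # module and runs its compiled partitions on Inferentia.
    loaded = torch.jit.load("resnet50_neuron.pt")
    print(loaded(example).shape)  # torch.Size([1, 1000])
    ```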

    When an Amazon customer—usually someone who owns an Echo or Echo Dot—makes use of the Alexa personal assistant, very little of the processing is done on the device itself. The workload for a typical Alexa request looks something like this (a rough code sketch of the same pipeline follows the list):

    1. A human speaks to an Amazon Echo, saying: "Alexa, what's the special ingredient in Earl Grey tea?"
    2. The Echo detects the wake word—Alexa—using its own on-board processing
    3. The Echo streams the request to Amazon data centers
    4. Within the Amazon data center, the voice stream is converted to phonemes (Inference AI workload)
    5. Still in the data center, phonemes are converted to words (Inference AI workload)
    6. Words are assembled into phrases (Inference AI workload)
    7. Phrases are distilled into intent (Inference AI workload)
    8. Intent is routed to an appropriate fulfillment service, which returns a response as a JSON document
    9. JSON document is parsed, including text for Alexa's reply
    10. Text form of Alexa's reply is converted into natural-sounding speech (Inference AI workload)
    11. Natural speech audio is streamed back to the Echo device for playback—"It's bergamot orange oil."
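
    Here is that pipeline again as a purely illustrative code sketch (not Amazon's code), with toy stand-ins for each stage so it's clear which steps are the inference workloads.

    ```python
    # Toy rendering of the Alexa request pipeline above. Every body
    # is a stand-in; in production, each "inference" stage is a
    # neural network served from Amazon's data centers.

    def detect_wake_word(audio: str) -> bool:
        return audio.startswith("Alexa")  # step 2, on-device

    def transcribe(audio: str) -> str:
        # Stands in for steps 4-6: phonemes -> words -> phrases (inference).
        return audio.removeprefix("Alexa, ").rstrip("?")

    def extract_intent(utterance: str) -> dict:
        # Step 7 (inference): distill the phrase into an intent.
        return {"intent": "IngredientQuery", "item": "Earl Grey tea"}

    def fulfill(intent: dict) -> dict:
        # Step 8: fulfillment service returns a JSON-style response.
        return {"reply": "It's bergamot orange oil."}

    def to_speech(text: str) -> bytes:
        # Stands in for step 10 (inference): text-to-speech.
        return text.encode()

    audio = "Alexa, what's the special ingredient in Earl Grey tea?"
    if detect_wake_word(audio):
        reply = fulfill(extract_intent(transcribe(audio)))["reply"]
        print(to_speech(reply))  # step 11: streamed back to the Echo
    ```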

    As you can see, almost all of the actual work done in fulfilling an Alexa request happens in the cloud—not in an Echo or Echo Dot device itself. And the vast majority of that cloud work is performed not by traditional if-then logic but by inference—which is the answer-providing side of neural network processing.
