• 3 Posts
  • 228 Comments
Joined 2 years ago
Cake day: July 5th, 2023

  • Sadly it doesn't seem to be fanless, which imo is a really nice feature when you don't care about high performance. Not sure whether you can actually find good deals on the Snapdragon laptops in the real world, but the list price is also quite high, and that keyboard with touch function keys doesn't seem great either.

    So in my book that's still no match for what a MacBook Air M1/M2 offers, which by now are a few years old and can be found at decent prices. They might be aiming at the same market, but they aren't equal.


  • A proper non-Apple MacBook Air equivalent, because imo for the average user who just browses the internet and does some light office work it seems perfect. By that I mean:

    • fanless
    • good screen, preferably 3:2 or 16:10
    • long battery life
    • unlike the Air, expandable storage and ideally non-soldered RAM
    • solid build quality
    • priced at maybe 600-800€?
    • doesn’t have to have the greatest performance

    Tbh I thought we would get it with Intel's Lunar Lake processors, but so far no luck.


  • Yep. Weight is lost through diet; sport might help, but it can also make you hungry. The main benefit of exercise is better health through increased fitness.

    People should compare how many calories an hour of exercise burns with the simple act of e.g. switching sugary drinks for water (rough numbers are sketched at the end of this comment). Especially when you aren't fit to begin with, you won't be able to run for hours each week anyway.

    Intermittent fasting definitely is a good method, but it varies for everyone. Imo it helps to start by changing what groceries you buy. At least for me, the further away from the plate you implement the caloric reduction, the easier it is.
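
    A rough back-of-the-envelope comparison in Python, with all figures being ballpark assumptions (roughly 600 kcal burned per hour of easy jogging for an average adult, about 140 kcal per 330 ml can of sugary soda; real values vary by person, pace and product):

    ```python
    # Rough weekly comparison: one hour of jogging vs. swapping two sodas a day for water.
    # All figures are ballpark assumptions and vary by person, pace and product.

    KCAL_PER_HOUR_JOGGING = 600   # assumed average for an adult at an easy pace
    KCAL_PER_CAN_SODA = 140       # assumed for a 330 ml can of a sugary soft drink

    weekly_jogging = 1 * KCAL_PER_HOUR_JOGGING       # one hour of running per week
    weekly_soda_swap = 2 * KCAL_PER_CAN_SODA * 7     # two cans replaced by water, every day

    print(f"Jogging 1 h/week:     ~{weekly_jogging} kcal")
    print(f"Skipping 2 sodas/day: ~{weekly_soda_swap} kcal")
    # -> ~600 kcal vs. ~1960 kcal: the grocery swap wins by a wide margin.
    ```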


  • If we are talking about the manufacturing side rather than design/software, I am very curious to see how SMIC develops. You are absolutely right that there is a big advantage for the second mover, since they can avoid dead ends and already know on an abstract level what works. And diminishing returns also make the gap slightly less relevant.

    However, I think we can't just apply the same timeline to them and say "they have 7nm now, and it took others x years to progress from there to 5nm or 3nm", because those steps include the major shift from DUV to EUV, which was in the making for a very long time. EUV is a whole different beast compared to DUV, where they are probably still relying on ASML machines for the smallest nodes (although I think producing DUV machines domestically is much more feasible). Eventually they'll get there, but I think this isn't trivial and will take more than 2 years for sure.

    On the design side vs Nvidia, hyperscalers like Alibaba/Tencent/Baidu, or maybe even a smaller newcomer, might be able to create something competitive for their specific use cases (like Google's TPUs). But Nvidia isn't standing still either, so I think getting close to parity will be extremely hard there as well.


    > Of course, the price gap will shrink at the same rate as ROCm matures and customers feel it's safe to use AMD hardware for training.

    Well, to what degree ROCm matures and closes the gap is probably the question. Like I said, I agree that their hardware seems quite capable in many ways, although my knowledge here is quite limited. But AMD so far hasn't really shown that it can compete with Nvidia on the software side.


    As far as Intel goes, being slow with my reply helps my point. Just today Intel canceled their next-generation GPU Falcon Shores, making it an internal development step only. As much as I am rooting for them, it will take a major shift in culture and talent for them to right the ship. Gaudi 3 wasn't successful (I think they didn't even meet their target of $500 million in sales), and now they probably don't have any release in 2025, assuming Jaguar Shores is 2026, since Falcon Shores was slated for the end of this year. In my book that is the definition of being more than 1 year behind, considering they are not even close to parity right now.


  • > Yeah. I don't believe market value is a great indicator in this case. In general, I would say that capital markets are rational at a macro level, but not micro. This is all speculation/gambling.

    I have to concede that point to some degree, since I guess I hold similar views about Tesla's value vs the rest of the automotive industry. But I still think that the basic hierarchy holds true, with Nvidia being significantly ahead of the pack.

    > My guess is that AMD and Intel are at most 1 year behind Nvidia when it comes to tech stack. "China", maybe 2 years, probably less.

    Imo you are too optimistic with those estimates, particularly for Intel and China, although I am not an expert in the field.

    As I see it, AMD seems to have a quite decent product with their Instinct cards in the server market on the hardware side, but they wish they had something even close to CUDA and its mindshare, which would take years to replicate. Intel wishes it were only a year behind Nvidia. And I'd like to comment on China, but tbh I have little to no knowledge of their state of GPU development. If they are "2 years, probably less" behind as you say, then they should have something like the RTX 4090, which was released at the end of 2022. But do they have something that even rivals the 2000 or 3000 series cards?

    > However, if you can make chips with 80% performance at 10% price, it's a win. People can continue to tell themselves that big tech will always buy the latest and greatest whatever the cost. It does not make it true.

    But the issue is that they all make their chips at the same manufacturer, TSMC, even Intel in the case of their GPUs. So they can't really differentiate much on manufacturing cost and are competing for the same limited supply, which means no one can offer 80% of the performance at 10% of the price, or even close to it. Additionally, everything around the GPU (datacenters, rack space, power usage during operation, etc.) also costs money, so the chip itself is only part of the overall package cost, and you also want to optimize for your limited space (a rough cost sketch follows at the end of this comment). As I understand it, datacenter construction and power delivery are actually another limiting factor for the hyperscalers right now.

    > Google, Meta and Amazon already make their own chips. That's probably true for DeepSeek as well.

    Google yes, with their TPUs, but the others all use Nvidia or AMD chips to train. Amazon has their Graviton CPUs, which are quite competitive, but I don't think they have anything on the GPU side. DeepSeek is way too small and new for custom chips; they evolved out of a hedge fund and just use Nvidia GPUs like more or less everyone else.
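
    A minimal back-of-the-envelope sketch of that point; every number is an assumption chosen only for illustration (GPU purchase price, power draw, electricity rate, overhead factor), not a real quote. The silicon price is only one slice of the hourly cost of running an accelerator:

    ```python
    # Rough total-cost-of-ownership sketch for one datacenter GPU.
    # Every number below is an assumption for illustration, not a quoted price.

    gpu_price = 30_000            # assumed purchase price in USD
    amortization_years = 4        # assumed useful life
    power_draw_kw = 1.0           # assumed GPU plus its share of cooling/networking
    electricity_usd_per_kwh = 0.10
    overhead_factor = 1.5         # assumed markup for rack space, building, staff

    hours = amortization_years * 365 * 24
    capex_per_hour = gpu_price / hours
    power_per_hour = power_draw_kw * electricity_usd_per_kwh
    total_per_hour = (capex_per_hour + power_per_hour) * overhead_factor

    print(f"Chip cost per hour:   ${capex_per_hour:.2f}")
    print(f"Power cost per hour:  ${power_per_hour:.2f}")
    print(f"All-in cost per hour: ${total_per_hour:.2f}")
    # Halving the chip price only shaves off part of the all-in hourly cost,
    # which is why "80% performance at 10% price" is hard to turn into an
    # 80/10 deal once the rest of the datacenter is included.
    ```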



  • I have to disagree with that, because this solution isn’t free either.

    Asking them to regulate their usage requires them to build excess capacity purely for those peaks (so additional machinery), to keep more inventory in stock, and, depending on how labor-intensive the process is, it also means people have to work on a less reliable schedule. With some processes it might also simply not be possible to regulate them up/down fast enough (or at all).

    This problem is simply a question of whether it is cheaper to a) build excess production capacity or b) build enough capacity to meet demand with steady production and add battery storage as needed (a rough comparison is sketched at the end of this comment).

    Compared to most manufacturing lines, battery storage is relatively simple tech, requires little to no human labor, and is still making massive gains in price/performance. So my bet is that it'll be the cheaper solution.

    That said, it is of course not a binary thing, and there might be some instances where we can optimize energy demand and supply, but I think in industry those will happen naturally through market forces. However, this won't be enough to smooth out the mismatch in the timing of supply and demand.
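
    A minimal sketch of that a/b trade-off, with all costs invented purely for illustration (amortized cost of extra production machinery vs. amortized cost of battery storage shifting the same output):

    ```python
    # Toy comparison of the two options for covering a daily production peak.
    # All numbers are invented placeholders; the point is the structure of the trade-off.

    peak_extra_mwh_per_day = 10          # extra energy-equivalent output needed at peak

    # Option a: oversize the production line so it can ramp up for the peak.
    extra_machinery_capex = 5_000_000    # assumed one-off cost of the extra capacity
    machinery_lifetime_days = 15 * 365

    # Option b: run the line steadily and shift energy with batteries.
    battery_capex_per_mwh = 150_000      # assumed installed cost per MWh of storage
    battery_lifetime_days = 10 * 365

    cost_a_per_day = extra_machinery_capex / machinery_lifetime_days
    cost_b_per_day = (peak_extra_mwh_per_day * battery_capex_per_mwh) / battery_lifetime_days

    print(f"a) excess capacity: ~${cost_a_per_day:,.0f} per day")
    print(f"b) battery storage: ~${cost_b_per_day:,.0f} per day")
    # Whichever daily figure is lower wins; falling battery prices keep pushing option b down.
    ```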


  • > It's a reaction to thinking China has better AI

    I don't think this is the primary reason behind Nvidia's drop, because as long as they have a massive technological lead, it doesn't matter as much to them who has the best model, as long as those companies use their GPUs to train it.

    The real change is that the compute resources (which are Nvidia's product) needed to create a great model suddenly fell off a cliff, whereas until now the name of the game was that more is better and scale is everything.

    China vs the West (or upstart vs big players) matters to those who are investing in creating those models. For example Meta, who presumably spends a ton of money on highly paid engineers and data centers, and somehow got upstaged by someone else with a fraction of their resources.


  • I think suffering is relative: has there been a time in the recent history of Russia that wasn't associated with some degree of struggle for the average inhabitant? And there has never been a timeframe long enough for any form of healthy opposition to become institutionalized and endure.

    Also I’d say that by Western standards even the pre-war living conditions of many Russians would qualify as poor. So their scale is vastly different to ours.



  • Doesn’t look great:

    • No progress with health features, which seem like the most exciting evolution.

    • Who truly needs a larger screen and a faster chip? Especially the former will presumably reduce battery life, something that matters a lot with watches.

    > The company is also working on a new version of its lower-cost Apple Watch SE model, which it last updated in 2022. One idea the company has tested is swapping the aluminium shell for rigid plastic. It's likely to lower the cost to something that could better rival Samsung's cheapest watch, the $199 Galaxy Watch FE. The SE currently starts at $249.

    That really doesn’t sound like Apple.


  • golli@lemm.ee to Deutschland@feddit.de · "Kinderschutz? Erst mal vertagt" ("Child protection? Postponed for now")

    Subjectively, I just notice that a relatively high share of these op-eds fall into the latter category. On balance, they are therefore more of a negative contribution to the discourse in my view.

    So something about this state of affairs has to change, or the format should be abolished. Provided, of course, that the goal isn't to influence readers with emotionally charged propaganda.

    For example, I would like to at least always get some accompanying context on who the author of the piece is, what qualifications they bring, and why they are being given this platform. Sure, one could say that readers can do their own research based on the name, but I don't think that should be necessary.


  • I was actually going to post this op-ed myself. I don't think much needs to be said about the content that hasn't already been discussed here and in countless other posts about Chatkontrolle. But what do you all actually think of the "Kommentar" (op-ed) as a general format?

    Even though there are of course some op-eds I agree with, and probably even more that I find just as problematic as this one, I've been wondering for a while whether this format is useful or should be abolished (at least in its current form). I regularly notice that they are written in a rather populist way, that facts are wrong or presented very one-sidedly or in abridged form, and that there is no information about the author other than their name.

    In the end, the media thereby offer the author a stage to spread their opinion, and the journalistic treatment (fact-checking etc.) that you would actually hope for from a reputable source falls away. Now and then there is at least a second op-ed presenting an opposing position, but I've noticed, for example, that these are often not promoted equally prominently.



  • You are right, Apple also has some legit professional stuff. And if the person using it gets paid a lot, then a one-time hardware purchase becomes negligible.

    Accurate fine motor control and even basic things like typing don't seem quite fleshed out, so that is indeed an issue. But I don't think it's a deal breaker that you can't do long shifts with it, since you'd probably only use it for certain tasks.

    Even more of a niche, but I could see it for something like architects, both for work and maybe even for presenting to clients.