IRC channel logs

2023-01-26.log


<zimoun>hi!
<drakonis>'ello
<civodul>o/
<rekado>the tensorflow problem may solve itself: https://www.semianalysis.com/p/nvidiaopenaitritonpytorch
<rekado>the claims in this article give me hope
<rekado>has anyone used https://github.com/openai/triton before?
<rekado>“The aim of Triton is to provide an open-source environment to write fast code at higher productivity than CUDA”
<civodul>uh
<civodul>reading this kind of article about market shares and all gives a weird feeling
<civodul>never heard about Triton before
<civodul>i guess it relies on the proprietary nvidia stuff anyway
<rekado>yeah, all that market share crap makes my eyes glaze over
<rekado>I just see what I want to see: first mover advantage has been squandered by both Google and NVIDIA
<rekado>Triton compiles to PTX instructions, i.e. it targets the same low level that nvcc targets for CUDA.
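For context, Triton kernels are ordinary Python functions that triton.jit lowers to PTX. A minimal sketch along the lines of the upstream vector-add tutorial (the block size, tensor sizes, and names here are illustrative, not taken from the discussion):

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        # each program instance handles one BLOCK_SIZE-wide slice of the vectors
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements          # guard the final, partially filled block
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    x = torch.rand(4096, device="cuda")
    y = torch.rand(4096, device="cuda")
    out = torch.empty_like(x)
    grid = lambda meta: (triton.cdiv(x.numel(), meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)  # JIT-compiled to PTX on first launch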
<rekado>if it’s true that CUDA and Tensorflow lose mind share, that’s a good thing in my opinion.
<rekado>the world of impenetrable huge frameworks with billions in investment sure is strange, though
<civodul>yeah
<civodul>i've been hoping for 10y to see CUDA fade away
<civodul>i thought we were getting there with Xeon Phi
<civodul>(funny in hindsight)
<civodul>then there was OpenCL, Vulkan, now AMD
<civodul>let's believe in the tenth-mover advantage :-)
<rekado>CUDA got one thing right: it’s easy to install and use
<civodul>is it?
<rekado>to this day I have no mental model of all the moving parts in opencl or rocm
<civodul>sysadmins at work seem to never be quite sure they got it right
<civodul>ah true
<rekado>I *wanted* us to use OpenCL back then but just couldn’t figure out how to even get started
<civodul>yeah so CUDA is easy in the sense that it's a complete homogeneous stack
<civodul>yeah
<rekado>and the popularity of tensorflow meant that nobody would use OpenCL seriously anyway
<rekado>I’m happy that tensorflow is no longer the only “serious” framework in use
<rekado>in our group everyone has moved on to pytorch
<civodul>i'm happy if that lets us forget about Bazel :-)
<rekado>yes!
<rekado>pytorch certainly wasn’t easy to package, but I’ll take it over bootstrapping all that java stuff for bazel and *then* figuring out how to actually build a bazel build system…
<civodul>oh yes, definitely
<civodul>piece of cake in comparison
<civodul>but yeah, still in the "impenetrable framework" space, as you wrote
<rekado>but overall I have mixed feelings about the current iteration of AI enthusiasm
<rekado>it’s so far outside the realm of free software
<rekado>expensive hardware -> cloud platforms
<civodul>that's the thing, it makes me anxious rather than enthusiastic
<rekado>lots of money -> big corporations steering the stack
<civodul>yeah
<rekado>it’s revolting
<civodul>plus an energy drain, with some applications being questionable
<civodul>i hate it when machine learning is used in situations where an analytical solution could be worked on
<rekado>re energy drain: it annoys me that this massive computing problem is set up within the adversarial framework of competition
<rekado>we could have had massively distributed collaborative computing
<civodul>most uses are about making money one way or another anyway no?
<rekado>I’m not very familiar with all the use cases
<rekado>I know of the usual surveillance suspects, but what I’m most familiar with is bio research.
<rekado>e.g. protein folding prediction
<civodul>i suspect scientific applications don't weigh much in terms of resource usage compared to the various surveillance applications
<civodul>imagine the ChatGPT budget for a single day :-)
<rekado>yeah, that’s insane
<rekado> https://twitter.com/sama/status/1599671496636780546
<rekado>“average is probably single-digits cents per chat; trying to figure out more precisely and also how we can optimize it”
<rekado>and recent news is that Microsoft now “donates” Azure GPU time (with the implication that OpenAI’s ChatGPT is now essentially locked into an Azure dependency)
<civodul>heh
<civodul>it's also competing a bit with their own Copilot thing