After chatting with some of you on this forum and seeing that we're all on Lemmy rather than Reddit, I think it would be a good idea for us to set up some study groups to improve our technological literacy and competency.

During my time on Lemmy, I’ve been able to increase my digital literacy and overall knowledge surrounding my system. I’ve loved the nearly endless rabbit holes Wikipedia has pulled me into, as well as the resulting happiness that comes from finally fixing a broken Linux system or piece of technology.

But what exactly does technological literacy encompass, one might ask? I'd like to illustrate via anecdote. When I first got into Linux, I was told to "Get a terminal emulator to SSH into the HPC so that you can run computational jobs." To most of you this sentence is completely normal, but to my unconditioned mind, it felt like a big bright light was being flashed in my eyes while my PI spoke Martian to me. After the initial disorientation, I downloaded what I thought was my only option for a terminal emulator (MobaXterm) and found myself sitting in front of a pitch-black terminal screen with a blinking prompt. I didn't know what a host was, how to manage a network, or any Linux commands (coreutils? never heard of her…), and couldn't really do anything beyond opening up WoW and Google Docs. The only thing more advanced than the plug-and-play Google/Microsoft software I used was my botched LaTeX setup, which I used to typeset math equations for my students, homework, and lab reports, since I could type in TeX format far faster than I could click on every Greek letter or symbol I needed. Overall, this really hampered my ability to do the research I was tasked with. I was supposed to learn how to use Vim as my IDE when the only IDE I had ever worked in was Spyder from Anaconda! VSCodium, Code::Blocks, Emacs: I did not know any of these existed.
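For anyone else staring down that same blinking prompt: the whole "SSH into the HPC to run jobs" instruction reduces to a handful of commands. A minimal sketch (the hostname, username, and job script are hypothetical, and I'm assuming a Slurm scheduler; clusters vary):

```shell
# Open an SSH session to the cluster's login node
# (hostname and username here are placeholders)
ssh jdoe@hpc.example.edu

# Once on the login node, submit a batch job to the scheduler
# (Slurm shown; your cluster may use PBS, LSF, etc.)
sbatch job.sh

# Check the status of your queued and running jobs
squeue -u jdoe
```

None of this is obvious the first time, but it really is just those three moving parts: connect, submit, monitor.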

Needless to say, it was extremely discouraging to be thrown head first into a difficult scenario with very little assistance while trying to juggle coursework and outside responsibilities. Those humble beginnings reinforced in me the fear that if I experimented with my computer and messed something up on the OS side, I'd brick my hardware and end up with some variation of Homer Simpson holding up the "So You Broke the Family Computer" book.

I'm sure that we all come from varying origins of computer literacy, which is why I've proposed a couple of possible areas of study that we could set up in small or large groups depending on interest. The meeting frequency, reference literature (textbooks, white papers, blogs, forums, etc.), and a project goal (which could be concrete or abstract) should be drawn up and worked towards to keep each topic focused. I've come up with a couple of fields for us to start with; feel free to add to the list or modify what I've written.

  1. Cryptography with a rigorous mathematical foundation, applied to both classical and quantum computing paradigms (AES, RSA, hash functions beyond just the surface treatment, information theory (we love our boy Claude Shannon), cryptographic primitives, Shor's algorithm, etc.)
  2. A hardware-agnostic study of firmware (what are some unifying principles of firmware that can empower the user to understand why certain aspects of a device are not functioning?)
  3. Hardware architectures (GPU, NPU, TPU, CPU, RAM, DIMM)
  4. Form factors (how geometry can impose certain design decisions, and so forth)
  5. Fundamentals from first principles, i.e., condensed matter physics theories for understanding classical computing systems. The group could also choose to segue into topological states of matter (Dirac fermions, Weyl semimetals, Mott insulators, and a myriad of other cool states of matter that aren't really discussed outside of physics / graduate engineering classes), qubits (Bloch sphere representations), and loads of other things that I'm sure exist but am unaware of.
  6. LLM inference technology and how it can be applied to case law, accounting, stocks, and various other fields where the solution to the problem lies somewhere in an encoded technical language.
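As a small taste of the first topic: even coreutils ship a cryptographic hash function, so you can see the avalanche effect from any shell (a toy demonstration, not a study plan):

```shell
# SHA-256 of a short string; -n keeps echo from appending a newline
echo -n 'hello' | sha256sum
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824  -

# Change a single character and the digest changes completely
# (the avalanche effect, one of the properties a group could dig into)
echo -n 'hellp' | sha256sum
```

Why that single-character change scrambles the entire output is exactly the kind of question a rigorous treatment of hash functions would answer.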

I'd like to begin the discussion with this as our starting framework. Does anyone have any interest in the topics listed above, or suggestions for other subjects? How should we manage these groups? Should we add some chats to the Matrix instance?

  • bionicjoey · 9 months ago

    I'm AuDHD, and I'm probably the same type of guy who gave you that first task. I run an HPC for scientists to do analysis (although in my case it's mostly bioinformatics).

    • gronjo45@lemm.ee (OP) · 9 months ago

      That’s so cool that you work on HPC systems. Do you ever have to work on the hardware side of things if some piece of the system suddenly stops functioning?

      I watched a video a while back on ruler SSDs (U.1, I think?), so I imagine there's some modularity in these systems? Would you be interested in being part of an HPC group for those trying to learn more about them?

      • bionicjoey · 9 months ago

        I’m more on the sysadmin and systems engineering side of things. The guy who was in my position before me was the one who built the system we run currently.

        I’ve had to do some system maintenance before, but I didn’t build 'em, and I didn’t make any choices about what parts to put in 'em. If something in the datacentre breaks, I call one of the datacentre guys and have them check it out physically. I work for a federal government, so it’s an absolutely huge organization with lots of teams to do every different job. The datacentre is actually physically not in the same facility as my office, and either way, I work from home.

        Most of my job nowadays is actually more providing expert advice on HPC and scientific computing for the upper management of my organisation. There’s a guy on my team who services support tickets from the scientists, and I help him out if he gets stuck. I also help out a lot with documentation and training to help onboard scientists who’ve never used computing like this before in order to get skilled up and start doing science.

        • gronjo45@lemm.ee (OP) · 9 months ago

          That sounds like a lot of responsibility for the job, but I’m sure it’s fun to work on such an impressive system!

          I’ve only seen videos/pictures of what the HPC systems actually look like in real life, so it’s definitely nice to have someone fix it when something goes awry. The HPCs that I worked on were for my university and a federal government national laboratory, but sadly my mentor was very unresponsive and unavailable to assist me. The dynamics of my life were all over the place at the time, so I didn’t get as much out of it as I would have liked. We wanted to run some molecular dynamics simulations to discover new catalysts and molecular sieves, so it was a really interesting project to combine with DFT.

          To be considered an HPC, does a system have to be in the petaFLOPS regime of computation? I recall that some of the Radeon and NVIDIA cards can get up to teraFLOPS, which is already mind-blowing given that I still have a Turing-architecture card in my system. Do you work on protein folding / soft-matter-like things?
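          For my own sense of scale (just unit arithmetic, nothing authoritative): a petaFLOP/s is a thousand teraFLOP/s, so even a strong consumer GPU sits well below petascale.

```shell
# 1 PFLOP/s = 10^15 FLOP/s, 1 TFLOP/s = 10^12 FLOP/s
echo $(( 10**15 / 10**12 ))   # prints 1000
```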

          • bionicjoey · 9 months ago

            The definition of “HPC” is actually a constant point of contention in my job. Most scientists and indeed most management people I work with use the term “HPC” to describe any multi-tenant science computing infrastructure that uses a Job Scheduler, regardless of how HP the C actually is. Most of the time I actually avoid the term, since it’s got this ambiguity.

            The system I work on takes up three full racks in the datacentre, but in the grand scheme of things it's not that big of an HPC: only 20 compute nodes with around 64 cores and 1–3 TB of RAM each.

            Most of the work my clients are doing is in the realm of bioinformatics for research purposes, so things like genomics, metagenomics, etc. Not so much something like protein folding. Mostly it’s lots of big complicated data science pipelines where the inputs are things like FASTA files and reference data. Lots of machine learning and statistical models trying to make connections between plants, animals, bacteria, fungi, viruses, etc. Notably, the science my organisation works on has very little to do with Human biology specifically.

            • gronjo45@lemm.ee (OP) · 9 months ago

              I've slowly been getting back to the comments. Are there any programs you'd recommend for playing around with metagenomics? I'm definitely interested in learning how to use the FASTA format for jobs. Are you still interested in participating in one of the study groups?
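              From the little I've poked at so far, FASTA is just plain text, so coreutils alone get you surprisingly far (the file contents here are made up):

```shell
# Each FASTA record is a '>' header line followed by sequence lines
printf '>seq1 sample A\nACGTACGT\n>seq2 sample B\nGGGTTTCC\n' > reads.fasta

# Count how many sequences the file contains (header lines start with '>')
grep -c '^>' reads.fasta        # prints 2

# List just the sequence IDs (first word of each header, minus the '>')
grep '^>' reads.fasta | cut -d' ' -f1 | tr -d '>'
```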

              • bionicjoey · 9 months ago (edited)

                I’m not a bioinformatician, so I can’t recommend any utilities. I’m also not really on Matrix so probably no to the study group idea, but I’m happy to answer questions in a thread like this. (Or maybe in a dedicated Lemmy community for science computing)

                • gronjo45@lemm.ee (OP) · 9 months ago

                  I think I’ll make a page for scientific computing. Let’s see if we can get others interested as well. How do I get an instance up?