• AdmiralShat@programming.dev

    If you don’t add comments, even rudimentary ones, or you don’t use a naming convention that accurately describes the variables or the functions, you’re a bad programmer. It doesn’t matter if you know what it does now; just wait until you need to know what it does in 6 months and you have to stop what you’re doing and decipher it.
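
    As a rough illustration (the function and names below are invented, not from this thread), even one rudimentary comment plus a descriptive name keeps the intent recoverable later:

    ```python
    from datetime import date

    # Invented example: a descriptive name plus one rudimentary comment
    # keep the intent obvious when you come back to this in 6 months.
    def days_until_subscription_expires(expiry_date: date, today: date) -> int:
        # A negative result means the subscription has already lapsed.
        return (expiry_date - today).days

    print(days_until_subscription_expires(date(2025, 1, 31), date(2025, 1, 1)))  # 30
    ```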

    • A_Porcupine@lemmy.world

      However, engineers who rely solely on comments to explain their code are bad at writing readable code.

    • fkn@lemmy.world

      Self-documenting code is infinitely more valuable than comments, because the code spreads with its use, whereas the comments stay behind.

      I got roasted at my company when I first joined because my naming conventions are a little extra. That lasted for about 2 months before people started to see the difference in legibility as the code started to change.

      One of the things I tell my juniors is, “this isn’t the 80s. There isn’t an 80 character line limit. The computer doesn’t benefit from your short variable names. I should be able to read most lines of code as a single, non-compound English sentence with only minor tweaks, and that sentence should describe what the line is doing.”
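
      A small sketch of that rule of thumb, with names invented for the example, contrasting a terse check with one that reads close to an English sentence:

      ```python
      # Two sketches of the same check; the names are invented for the example.

      # Terse: the computer is happy, the next reader is not.
      def chk(u, lvl):
          return u.trust >= lvl and not u.banned

      # Verbose: the return line reads roughly as the English sentence
      # "the user is trusted and the user is not banned".
      def user_can_post_comments(user, required_trust_level):
          user_is_trusted = user.trust >= required_trust_level
          user_is_banned = user.banned
          return user_is_trusted and not user_is_banned
      ```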

      • tatterdemalion@programming.dev

        An 80 character limit is helpful, though, when you need to have many files open at a time. Maybe 100 is more reasonable. Fighting indentation is important too.
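
        For instance (hypothetical code), guard clauses are one common way to fight indentation while staying inside an 80-100 character limit:

        ```python
        # Hypothetical sketch: the guard-clause version keeps the same logic
        # flat, which helps both the line limit and the indentation fight.

        def process_nested(order):
            if order is not None:
                if order.is_paid:
                    if not order.is_shipped:
                        order.ship()

        def process_flat(order):
            if order is None:
                return
            if not order.is_paid:
                return
            if order.is_shipped:
                return
            order.ship()
        ```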

        • fkn@lemmy.world

          I, too, remember the days before ultra high definition ultra wide monitors.

          I thought this argument was bogus in the 90s on a 21" CRT, and it has gotten even less valid since then. There are so many solutions to these problems that increase productivity for paltry sums of money that it’s insane to me companies don’t immediately purchase them for all developers.

          • tatterdemalion@programming.dev

            You have a point: devs should be using multiple large monitors. I will often need to have 3-4 files open at once, plus some browser windows. Having some limit on line length helps with this and with fighting code complexity.

            • fkn@lemmy.world

              The most important thing is comprehension. If something is too long and the length makes it less readable, then it is too long.

              But if having 3-4 files open at the same time makes it harder for you to comprehend a single file because you can’t get the full picture, that’s on you.

          • icesentry

            I have a massive ultrawide and I still 100% believe in line limits. Long lines are harder to read in general but even with a limit of 100 I frequently have 3 files opened next to each other and I can’t read entire lines easily. Line limits just aren’t about the size of the monitor and I can’t believe people still say that.

            • ZpAz@lemm.ee

              The best code has very few comments, because the naming conventions should explain what it does and individual functions should do one thing.

              Lines should not be too long, but any IDE can do soft wrapping anyway. So it’s kind of a moot point.
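
              A brief hypothetical sketch of that idea: when each function does one thing and is named for it, few comments are needed to say what it does:

              ```python
              def normalize_email(address):
                  return address.strip().lower()

              def is_valid_email(address):
                  return "@" in address and "." in address.split("@")[-1]

              def register_user(raw_email, user_store):
                  email = normalize_email(raw_email)
                  if not is_valid_email(email):
                      raise ValueError(f"invalid email address: {email}")
                  user_store.add(email)

              # e.g. register_user(" Alice@Example.COM ", user_store=set())
              ```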

            • fkn@lemmy.world

              I understand the concern, but readability and comprehension are way more important than line length. If the length impairs readability, it’s too long. Explicit limits are terrible. Guidelines, fine.

              Ultimately, you do you. I still think you’re crazy and I think your argument is poor.

              • icesentry

                Yes, a strict 80 character limit would be bad, but that’s why modern formatters aren’t strict and default to 90-100.

                I’ve pretty much never seen code that would have been more readable had the lines been longer than that.

                My main argument is still that shorter lines are more readable. I just think it’s a bullshit argument to say that long lines are fine because large monitors exist. I don’t see how that makes me crazy.

                • fkn@lemmy.world

                  See, I think length limits and readability are sometimes at odds. To say that you 100% believe in length limits means that you would prefer the length limit over a readable line of code in those situations.

                  I agree that shorter lines are often more readable. I also think artificial limits on length are crazy. Guidelines, fine. Verbosity for the sake of verbosity isn’t valuable… But to say never is a huge stretch. There are always those weird edge cases that everyone hates.

      • MajorHavoc@lemmy.world

        There’s no such thing as self-documenting code, unless every method and variable name has the word “because” in it.

        Anyone can read what the code does. The comments are there to answer why it does what it does the way it does.

        The “why” is invariably lost to time if it’s not committed to a comment here and there.
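
        A sketch of that distinction, using an invented scenario: the code states the what, and a short comment preserves the why:

        ```python
        import time

        def fetch_with_retry(fetch, attempts=3):
            # WHY: the upstream API (hypothetical) drops connections during its
            # nightly failover window; retrying with a short backoff avoids
            # paging on-call for a known, self-healing condition.
            for attempt in range(attempts):
                try:
                    return fetch()
                except ConnectionError:
                    if attempt == attempts - 1:
                        raise
                    time.sleep(2 ** attempt)
        ```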

        • fkn@lemmy.world

          This is a pretty ridiculous position to take, but if you believe it, then I’m glad you write the comments you do.

          There is an argument that commenting on the lack of expected code is valuable for this reason, but it certainly isn’t true in all situations.

          • MajorHavoc@lemmy.world

            We can agree on “not all situations”. Often the answer to “why did we do it this way?” is blazingly obvious, and no one wants a comment.

            But we all know that sometimes the “why” isn’t obvious at all.

            As far as I can tell, developers who do believe in self-documenting code either haven’t learned the power of “why?”, or they have a secret technique for encoding “why?” into their code structure.

            If it’s the second thing, I would be delighted to be brought in on it. (No sarcasm. Maybe I’ve missed a trick here.)

            • fkn@lemmy.world

              I’ll answer in a couple of different ways.

              1. If I am writing library code, my “why” is that you have an end use: I don’t care why you use it, and you don’t care why I wrote it. You only care about what my code does so you can achieve your “why”.

              2. If we are working on the same code, we have different “whys” but the same “what”. Your comment as to why isn’t the same as mine, which makes the comment incorrect.

              3. We are looking at a piece of code and you want to know how it works, because the stated “what” is wrong (bugs). This might be the “why” you are looking for, but I call this a “how”. This is the case where self-documenting code is most important. Code should tell a second programmer how it achieves the “what” without needing an additional set of verbose comments. The great thing about code is that it is literally the instructions on the how. The problem is conveying the how to other programmers.

              There are three kinds of “how”: the self-evident; complex “hows” that require multiple levels of abstraction and lots of code; and complex, short “hows” that are not apparent.

              The third is where most people get into trouble. Almost all of these cases of complexity can be solved with only a single layer of abstraction, producing easily readable, self-documenting code. The problem in many cases is that they start as a one-off, and people are lousy at putting in the work on a one-off solution. Sometimes the added work of abstraction, and of building a performant abstraction, turns a small task into a large one. In those cases comments can make sense.

              Sometimes these short, complex “hows” require specialists. Database queries, performant Perl/functional queries, algorithmic operations, complex compile-time-optimized templates (or other language-specific optimizations) and the like are some of the most common examples. This category of problem benefits most from a well-defined interface with examples for use (which might be comments). The “how” of these is not as valuable to the average developer, and often requires specialist knowledge to understand regardless of comments. In these cases what they do is far more valuable than how or why.
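
              As a sketch of that last point, assuming a sqlite3-style connection and a hypothetical payments table: the specialist “how” sits behind a plainly named interface whose docstring shows usage rather than explaining the SQL:

              ```python
              def top_customers_by_revenue(connection, since_date, limit=10):
                  """Return (customer_id, revenue) rows, highest revenue first.

                  Example:
                      top_customers_by_revenue(conn, "2024-01-01", limit=5)
                  """
                  # The aggregation details matter to the query specialist;
                  # callers only need the well-named interface above.
                  return connection.execute(
                      """
                      SELECT customer_id, SUM(amount) AS revenue
                      FROM payments
                      WHERE paid_at >= ?
                      GROUP BY customer_id
                      ORDER BY revenue DESC
                      LIMIT ?
                      """,
                      (since_date, limit),
                  ).fetchall()
              ```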

              • MajorHavoc@lemmy.world

                You’ve given a lot of consideration to modern, recently created code. But the best of that code goes on to become someone’s legacy nightmare. (The most fit and correct code survives long after anyone really wishes it would.)

                In high-quality legacy nightmare code, the “why” is lost unless someone wrote it down.

                I’ve been on both sides of that mystery. “Why didn’t they just do X?”

                • Sometimes it was because X didn’t exist yet, or wasn’t mature enough.
                • Sometimes it was because X is fundamentally the wrong solution, in a very subtle way.

                There are two ways to know the difference:

                  1. Painful trial and error.
                  2. A comment (or document) answering “why”.

                I prefer the second way, but I happily charge more for the first way.
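
                A hypothetical example of the kind of note that replaces the painful trial and error:

                ```python
                def chunked_upload(payload, chunk_size=512 * 1024):
                    # Why not the vendor's bulk-upload endpoint? (Invented reason
                    # for illustration:) it silently truncates large payloads, so
                    # we chunk manually even though it looks like the "wrong" fix.
                    for start in range(0, len(payload), chunk_size):
                        yield payload[start:start + chunk_size]
                ```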

    • tatterdemalion@programming.dev

      This is why code review exists. Writers can’t always see what’s wrong with their work, because they have the bias of knowing what was intended. You need a reader to see it with fresh eyes and tell you which parts are confusing.

      That’s not to say you shouldn’t try to make it readable in the first place. But reviewing and reading other people’s code is how you get better.

    • rolaulten@startrek.website

      Let’s take this one step further: I should be able to get the core ideas in your code from the comments and CS 101-level constructs (e.g. basic data structures, loops, and if/then).
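
      For instance (a made-up example), the core idea should come through even for a reader who only knows loops, dicts, and if/then:

      ```python
      def most_frequent_word(words):
          # Core idea: count each word, then keep the word with the highest count.
          counts = {}
          for word in words:
              counts[word] = counts.get(word, 0) + 1

          best_word = None
          best_count = 0
          for word, count in counts.items():
              if count > best_count:
                  best_word = word
                  best_count = count
          return best_word
      ```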