• ☆ Yσɠƚԋσʂ ☆
    10
    2 years ago

    I’d argue there’s a historical aspect to this as well. Back when personal computers became available, they were slow machines with limited resources. Originally they were programmed by writing assembly by hand, and C was effectively a way to write portable code that was very close to assembly but could be compiled for different instruction sets. Meanwhile, the kinds of software people were writing were relatively small and often maintained by a single developer. Having an imperative language that lets you squeeze as much as you can out of your hardware made a lot of sense in this scenario.

    A whole generation of programmers learned to code in this environment, and then they went on to design languages and teach others. Most mainstream languages are derived from C, and bear a lot of similarity to it both syntactically as well as semantically.

    And of course we now have mountains of reusable code written in imperative languages, documentation, application platforms, and so on. So, there is a lot of incentive to use a language that’s popular.

    Of course, today the problems we’re solving are different from the ones we solved forty years ago. Raw performance is often not the most important consideration for code. Being able to write code that’s easier to reason about, maintain, and work on in teams is often a much more important consideration.

    The fact that we’re seeing functional programming being adopted shows that it does solve real problems for people that can’t be easily addressed using the imperative style.

    My experience is that imperative programs quickly become difficult to reason about as they grow. This is especially true with OO style programming. Each object is basically a state machine, and your program is a graph of these interdependent state machines. It’s very difficult to tell what any particular object is doing without knowing the state of the entire program.

    On the other hand, FP style focuses on using pure functions that can be reasoned about independently with the state being passed around explicitly between them. This style of programming makes it much easier to create independent components that can be reasoned about in isolation.
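
    As a contrived sketch of the difference, here’s a hypothetical account example (not from any particular codebase):

        ;; OO/imperative style: the balance is hidden mutable state, so
        ;; the result of deposit! depends on the history of earlier calls.
        (def account (atom {:balance 0}))
        (defn deposit! [amount]
          (swap! account update :balance + amount))

        ;; FP style: a pure function of its inputs. The current state goes
        ;; in and the new state comes out, so each call stands on its own.
        (defn deposit [account amount]
          (update account :balance + amount))

        (deposit {:balance 100} 50) ;=> {:balance 150}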

    I find that a good general approach is to push state to the edges of the application while keeping the business logic pure. The clean architecture approach is a good example of this pattern.
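
    A minimal sketch of that shape (fetch-order and save-order! are made-up stand-ins for whatever persistence layer you actually have):

        ;; Pure core: business logic is plain data in, plain data out.
        (defn apply-discount [order]
          (if (> (:total order) 100)
            (update order :total * 0.9)
            order))

        ;; Hypothetical stand-ins for a real storage layer.
        (def db (atom {1 {:id 1 :total 120}}))
        (defn fetch-order [db id] (get @db id))
        (defn save-order! [db order] (swap! db assoc (:id order) order))

        ;; Impure shell: reads and writes at the edges, pure logic in between.
        (defn process-order! [db order-id]
          (let [order  (fetch-order db order-id)   ; read at the edge
                priced (apply-discount order)]     ; pure in the middle
            (save-order! db priced)))              ; write at the edge

        (process-order! db 1) ; db now maps 1 to {:id 1, :total 108.0}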

  • @[email protected]
    6
    2 years ago

    I think it’s mainly because object oriented is easier to learn. For the same reason, weakly typed and dynamically typed languages are so common cough JavaScript cough despite not being as good as strongly and statically typed ones. Like, we’ve had more than enough time for the paradigm to shift, and many paradigms in computing have shifted, so to say it’s merely inertia or luck in the early days is incomplete.

    • ☆ Yσɠƚԋσʂ ☆
      4
      2 years ago

      In my experience it’s not really about OO being easier to learn; in fact, I’d argue it’s much harder to use correctly. It’s really just that people are more likely to learn it first, and once they do, they stick with it.

      What makes learning FP difficult, if you’ve already internalized the imperative style, is that patterns aren’t easily transferable from one to the other. People mistake having to learn new patterns from the ground up for FP being inherently more difficult.

      My anecdotal experience with this is teaching Clojure to co-op students my team hired in the past. We found that first- and second-year students could pick up Clojure easily, while third- and fourth-year students often found it more challenging. The main factor was that students earlier in their studies didn’t have a lot of preconceptions about how code should be written, whereas students from later years had to unlearn things.

      I find ego plays a role here as well. People who see themselves as being experienced don’t like feeling like beginners, and that’s what it feels like when you’re learning something different from what you’re used to.

      I’m less convinced regarding static typing though. I find the key feature that makes code maintainable is immutability, because it allows local reasoning about the code. Any large program can, and in my opinion should, be broken down into small independent components. I don’t find dynamic typing to be much of an issue with this approach, and it’s always possible to add things like specs and schemas at component boundaries to ensure correctness.
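
      In Clojure, for instance, clojure.spec can enforce a contract at a component boundary while everything inside stays dynamic. A rough sketch (::user and register-user are made up for illustration):

          (require '[clojure.spec.alpha :as s])

          ;; Describe the shape of data crossing the boundary.
          (s/def ::name string?)
          (s/def ::age nat-int?)
          (s/def ::user (s/keys :req-un [::name ::age]))

          ;; Validate at the edge; code inside the component stays dynamic.
          (defn register-user [user]
            (if (s/valid? ::user user)
              user
              (throw (ex-info "invalid user" (s/explain-data ::user user)))))

          (register-user {:name "Ada" :age 36}) ;=> {:name "Ada", :age 36}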

      The main downside that I’ve found with static typing is that it limits you to the set of statements that can be expressed in a particular type system. So you either get a restrictive language with a simple type system, or a flexible one with a complex type system that introduces its own mental overhead.

    • @[email protected]
      3
      2 years ago

      OOP as it’s popularly known is basically “fancier” imperative programming; Turing > Von Neumann > Imperative > OOP

      Lisp is basically where we got dynamic typing, and Lisp is the earliest influential example of functional programming.

  • @[email protected]
    1
    2 years ago

    Because Turing created an actual computer for his computational model, while Alonzo Church did not for the lambda calculus. So people adopted Turing’s model for the early digital computers and the programming languages that followed, and it remains pervasive to this day.

    • @[email protected]
      6
      2 years ago

      The machine that Turing made wasn’t exactly influential. EDVAC (Von Neumann’s machine) overshadowed it quite dramatically (to the point most people in the field don’t even know that ACE exists, but know the phrase “Von Neumann Machine” instantly).

      Turing’s main influence on computer science was theoretical, not in implementation, despite him technically being “first” with a stored-program computer.

      • @[email protected]
        2
        2 years ago

        Of course, but Turing influenced Von Neumann. Turing’s model was probably more intuitive than Church’s as well. So the timeline’s roughly like this: Turing > Von Neumann > Imperative dominance

        • @[email protected]
          2
          2 years ago

          I … think that’s exactly what I said. Turing’s influence was mainly theoretical, not practical. That Von Neumann was influenced by (and even plagiarized to some extent) Turing is indisputable, but Turing didn’t “[create] an actual computer for his computational model” in any way that was actually influential.

          Tragically.

          Because EDVAC was kind of lame compared to even Pilot ACE.

          • @[email protected]
            4
            2 years ago

            What I meant (but failed to convey) was that Turing provided an actual model of a computing engine, so it was more straightforward to implement, while Church’s work did not. Besides, pure lambda calculus was pretty convoluted even for representing things like the natural numbers. Implementations of Church’s work would only be explored in the ’60s with McCarthy et al., a 20-year gap that defines computing to this day.
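
            To give a taste of that convolution, here’s the Church encoding of the naturals sketched in Clojure, where even the number 2 is a higher-order function:

                ;; A Church numeral n is a function that applies f to x n times.
                (def zero  (fn [f] (fn [x] x)))
                (defn succ [n] (fn [f] (fn [x] (f ((n f) x)))))

                ;; Recover an ordinary integer by counting the applications.
                (defn church->int [n] ((n inc) 0))

                (church->int (succ (succ zero))) ;=> 2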

            • @[email protected]
              1
              2 years ago

              Fair enough. Turing’s model was a more comprehensible machine from an implementation standpoint.