Has programming lost its way? Part Two

Whether you are an engineer, a designer, a programmer or a member of any other skilled trade, one lesson many of us were taught early on is "keep it simple". Sadly, this lesson is often lost in the name of progress, especially when it comes to programming.

Let me give you one example. I know this won't go over well with most programmers, but it needs to be said. Languages like C++ simply are not simple by design. Object-oriented programming, while possibly having some value for specific tasks, does not make programming simpler. I would venture to say that the so-called benefits of object-oriented programming have more to do with the feature set of the higher-level objects some languages provide than with those features being implemented using OOP.


Coding environments are no longer code editors but full studio environments, which in my opinion do little to make programming easier. Sure, most programmers will say they are more productive now than in the past using their favorite studio environment, but a valid question to ask is: are programmers actually writing better code today, or writing it faster, or are we simply looking for the computer to do all the work rather than doing the work ourselves?

I enjoy writing code. I enjoy looking for solutions to problems and being creative in the code I write. I have found, though, that once some developers get away from the drag-and-drop environment, they may find themselves at a loss as to how to accomplish a task. Programming tools and languages today are supposedly designed to improve productivity in software development, but in the end, much depends upon the programmers themselves.

The human mind is a better debugger than any man-made application can be. Developers of programming tools can't always be trying to solve their customers' problems by making the tools do all the work. Programmers need to learn how to code and how to debug. Coding is a skill. Debugging is a skill. When a programming environment becomes so complex that one has to read a dozen books to learn how to use it, then something is definitely wrong. Is that one's idea of ease of use?

Code Readability

Any programmer worth his or her salt should be a learner. You can't know everything, but when you currently lack the knowledge necessary to accomplish a task, you must be willing to learn from other people's code. How quickly one grasps what code is actually doing depends mostly on how readable the code is. Code readability, in my opinion, is the real test of whether a programmer has really learned the lesson of "keep it simple".

Sadly, most code I come across today is neither simple nor easily readable. I am not a C programmer (I use BASIC), but I have to examine a lot of C code to find solutions to problems, since the majority of Windows API code examples are in either C or C++. Because I am an API programmer (WIN32), I research a lot of code on the Web (e.g. MSDN, Code Project) to see how to properly access Windows APIs. While I don't know C (nor do I personally like it), I usually don't have too much trouble making sense of good old-fashioned non-OOP C code. When it comes to C++, though, half the time I can't even make sense of what some code is doing.

Procedural Compared to Object-Oriented Code

I have found that procedural code is often far more readable than OOP code. While I prefer the naturalness of BASIC over, say, C, even with a language like C I find procedural code far easier to read than C++. I personally find procedural-style code is more likely to be "simple" and easy to read than OOP code is.

Programming takes time (time is money, as they say), and I can write procedural code much faster than OOP code, with much less preplanning involved. What is needed, though, is more modular design, which improves code readability and makes debugging easier.

Code does not need to be object-oriented to be modular. Procedural code can and should be modular in design (code reusability). Even well-written assembler, with proper modular design, can be easy to read and to code. There are some valuable points made by Richard Mansfield in his "Has OOP failed?" white paper, and while I would not say that all OOP is bad, I do believe it is important not to overemphasize OOP as the solution to all programming problems. OOP has not produced simpler code overall, in my opinion. It may offer some benefits in user interface coding, but even there it is not necessarily superior.
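To make that concrete, here is a minimal sketch of modular procedural design in C (a hypothetical fragment, not from any real codebase): a small "module" built from a plain struct and a handful of well-named functions, reusable without any classes.

```c
#include <assert.h>

/* A tiny "counter" module: one struct plus a small set of functions
   that operate on it. In a team setting, the struct and prototypes
   would live in counter.h and the definitions in counter.c, so the
   module can be shared and reused like a library. */
typedef struct {
    int value;  /* current count */
    int step;   /* amount added per tick */
} Counter;

void counter_init(Counter *c, int step) {
    c->value = 0;
    c->step = step;
}

void counter_tick(Counter *c) {
    c->value += c->step;
}

int counter_value(const Counter *c) {
    return c->value;
}
```

No inheritance or hidden state is involved; any part of the program can reuse the module simply by including its header.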

IntelliSense or Common Sense?

While I won't go as far as to say that IntelliSense is wrong, I do believe that modern coding environments say something about the state of programming. If you took away the IntelliSense feature of the code editors, the average developer would likely be overwhelmed. Why? Because of the object craze, everything has to be an object, and so programming languages are inundated with a huge command set that has become more like a New York phone book (exaggerated for emphasis) than a programming language command set.

Downside of the Bleeding Edge!

This is one thing that concerns me, because as an individual software developer I find it challenging to keep up with all the changes in technology. I don't have the resources (or time and strength) to keep being retrained. I use what I know and build upon that over time. As I research all the new things coming along in the tech world, one thing stands out: developers' obsessive thirst for new things.

New stuff isn't bad, but if we always keep developing software for the bleeding edge of technology, then we lose the benefit of all our hard work in the past. We have all heard the saying "don't reinvent the wheel", and it makes sense to a developer, but sadly, because everyone keeps developing for the bleeding edge, we are forced to reinvent the wheel over and over again, and sometimes the wheel gets worse with each iteration. This wastes time, money and resources. It is possible to continue to use development methods that may seem dated to some, while progressively adding new technologies at a more reasonable pace.

To illustrate this, consider automobiles. They have come a long way since their invention, but amazingly, some core designs in cars have not changed all that much in 30 years. While many mechanics today use computers for diagnostics, you may be surprised that some of the tools mechanics use have been around for decades and are still useful today. When I had to diagnose a severe problem with my car, the most useful tool was actually the good old vacuum gauge. Sure, I could plug in the diagnostic computer (OBDII), but the vacuum gauge actually told me more than the computer did. I ended up rebuilding the entire engine, but the point is that an old tool was still useful, and even more helpful than a new one.

If programming languages keep changing year after year, but fail to maintain backward compatibility, then years of software development are literally thrown out the door. That wastes money, wastes time and wastes valuable experience and resources. We end up "reinventing the wheel" over and over again, but have we actually improved software that much?

If you wonder why many businesses are still using Windows XP, a more than 10-year-old operating system, perhaps the old adage is true: "if it isn't broken, don't fix it". It is a valid question to ask whether some of the software today is really innovation, or simply an obsession with the bleeding edge and a desire for something different, but not necessarily better.

For example, I live in a rural area, and in just a few years I have gone from dial-up Internet, to 768Kbps high-speed, and now to 1.5Mbps. Now I can download things a lot faster, which is nice, but my overall experience with websites has not improved as dramatically as I had hoped. Website developers assume that more people have high-speed connections, so they get careless about how fast pages will load. Improved broadband speeds are lost to careless web design. A similar thing has happened with software. As computers get faster and faster, the experience does not necessarily get better and better.

An Interesting Experiment

An interesting experiment to try is this: if you can find an old copy of Windows 95/98 sitting around and some old CPU-intensive application that dates back to that era (e.g. a 3D modeling package), try installing the software on a computer far more powerful than what was available when the software was released. For example, install it on a computer with a 1GHz CPU and 1GB of memory. Most computers in the Windows 95/98 era were likely under 100MHz. You may be surprised to see how "fast and fluid" the experience is. Now imagine running that software on, say, a top-of-the-line multi-core CPU we have today.

I really don't get the impression we are producing much better software than we did in the past. Coding environments definitely lack simplicity. Software is slower than it needs to be. Software is likely still just as riddled with bugs as it was 15 years ago. Software development is likely no more productive than it was 15 years ago. The user experience is not necessarily significantly improved compared to the past.

So I ask again: has programming lost its way? I will leave that up to you to answer.


Chris Boss is an advanced Windows API programmer and developer of the 10-year-old EZGUI, now at version 5. He owns The Computer Workshop, which opened for business in the late 1980s. He originally developed custom software for local businesses. Now he develops programming tools for use with the PowerBasic compiler.

55 Responses to Has programming lost its way? Part Two

  1. StockportJambo says:

    Gosh, I'd hardly know where to begin with this.

    OK... top to bottom. C++ is a simple language. Why? It has very few keywords, and very few operators. Certainly when you put it next to a complex language, like say, Basic...

    I agree that modern debuggers have made programmers "lazy". When I first started programming (games on the Commodore 64 in 6502 assembler) the first you knew of a problem was when your computer completely locked up. I could solve those problems then... I'm not so sure I could solve them now. Intellisense similarly - in the old days, we had to go looking things up in *books*, cross-referencing things manually, and we didn't have Google or Stack Overflow to help us.

    The OOP stuff... as I mentioned in your last article's comments, you don't have the mindset so you can't understand it. That I am afraid, is entirely your failing, and not one of the methodology. Deal with it, or fix it. Don't waste time moaning about it.

    I'm not sure where you're coming from when you imply that *languages* introduce breaking changes every year. That just doesn't happen. A C# 1.0 program written using Visual Studio 2002 will compile perfectly well in VS 2012, if the references are brought up to date.

    And as for the rest - I agree 100%, but bloatware has many reasons. Windows itself is to blame for much of it, that, and the hardware fragmentation of the PC it has to support. Similarly, people are involved in planning software that shouldn't be. Thirdly, I think there's an argument to say that users are more sophisticated now than they were 15 years ago. The standard has been raised. What seemed amazing 15 years ago wouldn't get a look-in now, because incrementally, modern software is just so much better.

    • chrisboss says:

      Quote "The OOP stuff... as I mentioned in your last article's comments, you don't have the mindset so you can't understand it. "

      Not a lack of the mindset to understand it, but a preference to avoid it when possible. I read about one study which stated that 75% of IT projects fail, so obviously a lot of software is not coming out as planned. Yes, a lot of software does some amazing stuff today, but that does not mean that software written 15 years ago did not do the same for its time. I find that I am not alone in my views on OOP. Richard Mansfield's white paper says a lot, and I think he knows a lot more than me.

      I think productivity says a lot. I am a 100% procedural-style programmer, using an advanced Basic which can produce fast and small apps on par with C. I write tools for programmers, not simply end-user stuff. One customer once asked me whether I had a team of programmers working on my software, since my tools had helped his company produce an amazing number of apps in a short time. I had to tell the customer that it's just "little ole me" by myself. No team of programmers. The point is that OOP has not shown itself to be the panacea of programming that many thought it would be. OOP can be useful, but it is only one small tool among many. It is not the total solution.

      • StockportJambo says:

        ... meanwhile, back on Planet Earth, regular programmers don't have the "luxury" of being "little ole me by myself".

        We work in teams, because companies that can pay a regular salary that pays the bills & feeds the kids like it that way, all for good, sound, business reasons. If Fred is sick, or Moira has a baby, someone else can pick up their work easily.

        They also like that we use modern object orientated development systems, instead of trying to divide spaghetti using procedural languages. There is a reason why this trend has occurred, and it's not just change for the sake of it.

        You are telling everyone what works for you, and you alone. Hooray - fair play & more power to you. But most of us don't work that way, therefore we need a more sophisticated approach than the "hack that just works... doesn't matter because I know about it and can work around it..." mentality that you get when you own a project completely by yourself.

        How would you sub-divide a program written in BASIC into discrete work packages that can be given to team members? How would you provide interfaces so that bottlenecks don't happen? These are real world questions that you didn't answer last time, so I hope you can now. Saying "it can" doesn't make it so. It can't. That's why C#, Team Foundation Server, Visual Studio, Sharepoint etc exist.

        Prove otherwise... offer some real world examples of your own instead of just re-enforcing your point with "it can". 

        I've spent the best part of 30 years of my life programming computers, using all kinds of different languages on different platforms... sometimes on my own, but more often in a team. I've seen both sides, and in my experience, OOP wins hands down in the real world. It's no panacea, but it's by far the best we have at the moment.

        The panacea lies further ahead over the horizon. It doesn't lie 100 miles in the direction we came from.

      • Adas Weber says:

        IT projects don't fail because of OOP. They fail due to poor decisions made way before any coding has begun. They fail because someone has not understood the client's requirements, or because the resulting specification given to the developers is unclear.

        OOP allows you to create an abstract representation of the real-world problem you are solving. If the abstract solution doesn't correctly represent the real-world problem, then the project is very likely to fail. But that's not the fault of OOP.

  2. smist08 says:

    There are a plethora of programming languages because there are a plethora of needs. C/C++ are really good for some low level things. Some applications require extreme parallel programming, some extreme graphics performance. There are modern scripting languages that to me are far easier and more powerful than Basic. Some like Ruby, some like Python. Choose what works best for you.

    Sure, there are some bloated and sluggish programs out there. There always have been. But could you run a modern state-of-the-art video game on your old Windows 95 machine? Did you have real-time photo-realistic graphics? My phone can perform real-time voice-to-text; could Windows 95 do that? Look at the Intuit mobile app that scans a W2, converts it to a tax return and files it. Could Windows 95 do that?

    I'm not interested in the applications that Windows 95 ran. Sure, not all new things work out, but look around you, and there are some stunningly amazing applications out there.

  3. Jeremy Blosser says:

    Okay -- I gotta ask this question.
    Is this article (and the prior one) some sort of horrible advertising for your company?

    How could anyone in their right mind even think this is a valid argument? You are knocking Intellisense? Really!? I mean, I could go and look up the Win32 API calls like you apparently do, or I could use Intellisense and not have to look up anything.

    And do you realize that modern OOP languages like C#, Java, C++ allow for procedural based coding and even functional based coding right? So you could, I dunno, maybe do something crazy like use the right programming paradigm for the job at hand?

    I think StockportJambo has it right. You haven't developed an object oriented mindset so you don't understand it.

    Instead of trying to understand and grasp it and see how it could help you, you seem to have run in the opposite direction.

    • Robert Johnson says:

      Jeremy has a good point about IntelliSense. When I started writing code, I learned without it. But when I started using development programs that included IntelliSense, I found that it helped me complete my tasks much quicker. IntelliSense may have its negatives, but for the most part it saves time I would otherwise spend poring over lots of APIs.

  4. chrisboss says:

    I never said OOP has no purpose. It does. But today's programming languages have literally been taken over by OOP, to the point that programming has, in my opinion, become more complex, not easier. Computers have become more powerful, yet software (and operating systems) run more slowly, not faster. Most programmers develop on bleeding-edge computers, so they don't see how their software really runs on the computers everyone else uses. Tablet PCs have brought this to light, because it is not so easy to pack as much power into a tablet. The Atom CPU is quite capable, yet it is easily maligned because it does not have the power most programmers are used to.

    As far as this article being an advertisement for my own business, I tried hard not to mention it in the article at all. I only mentioned that I use the Windows API (WIN32) and that I use Basic. I provided at least one external source with a viewpoint about OOP, from a respected author and programmer, so I am not alone in the viewpoint that OOP is not the panacea many would like all programmers to believe.

    As far as working with a team, OOP is not required to share code. Libraries can be written by team members and shared. OOP can be used when it is well suited to a task (some tasks do benefit from it), and other component models can be used (even non-OOP based). Besides, if no one has the courage to suggest something different from the mainstream, then what has happened to programming?

    The real test is the quality of the software developed, the speed at which it is developed and whether it does the tasks it was designed for. If it can be done with OOP, fine. But if it can be done using other methods (like purely procedural), then what is the problem? Today a lot of software projects fail, software is buggy and the user experience is not always great. Why not consider alternate ways of developing software?

    • StockportJambo says:

      Y'know, I've heard of this thing called a "steam engine". It's awesome. It's a completely alternative mode of rail transport...

      Nothing you say is convincing to anyone who has been on both sides of the fence (and there aren't really "sides", it's just about using the right tool for the job). Because you don't understand the benefits of specialisation, polymorphism, data hiding and other OOP concepts, you don't know how to use them properly. So therefore, they must be rubbish.

      Pah.

  5. chrisboss says:

    Quote:  "Prove otherwise... offer some real world examples of your own instead of just re-enforcing your point with "it can"

    Some don't like the idea of me mentioning my own software or work in my articles, so I tried to downplay it in this article. But I do have a customer base who develop software for use all over the world, in many commercial settings. Two applications written by two different companies using my tools ended up at the "Oscars" (I think it was that one) one year, behind the scenes of course. Large companies like Disney may not realize they are using a software package created with my tools. One company that writes software used by large oil companies for dealing with underwater ROVs uses my tools. Some of the software created using my tools is used in manufacturing environments.

    I design tools for programmers, so I have to have some understanding of what it takes to develop software. My primary GUI engine can run smoothly on everything from Windows 95 to Windows 8. It dynamically polls the operating system to see what features are available and deals with each OS accordingly.

    The point is that my software is being used in real-world situations, by real developers in commercial settings. My viewpoints, while different than most, are based on real experience. This does not mean everyone should program the way I do; there is room for a variety of programming methods. But to mention an alternate way of programming, such as procedural rather than OOP, is too much for many programmers to swallow. I know that. Yet it does work, and well, so why not at least consider it?

    • StockportJambo says:

      It's not that it's too much to swallow, it's just that most programmers who have been around a while have been there, done that, and bought the t-shirt.

      Time has moved on, and development methodologies have improved. Many of the wheels you are re-inventing above are already handled by frameworks such as .NET, so why bother?

      One thing I do know is that your users won't care. They just want it to work, and for you to be reactive to changing requirements.

      • Adas Weber says:

        @StockportJambo:disqus 

        Indeed, and being reactive to changing requirements is one of the big advantages of OOP. It allows large scale projects to be maintained in the future without breaking anything when changes are required.

    • chrisboss says:

      Here is an example of the ROV (remotely operated underwater vehicle) control systems where one of my customers' software is being used, software that was developed using the GUI engine I created. At about 4:33 (4 minutes 33 seconds) into the video you will see a laptop running the system control software. That software was designed using my tools.

      • StockportJambo says:

        Nice. But why was it better to write procedurally rather than using OOP, other than the fact you don't understand OOP very well? That's what I'm getting at - I'm not looking for your programming resume! :)

  6. Dwedit says:

    Object Oriented Programming is nothing more than writing functions that take a hidden "this" pointer as the first parameter to the function.  That's all it is.  It has nothing to do with code bloat.
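    That equivalence can be sketched in plain C (an illustrative fragment, not actual compiler output): what C++ writes as a method call on an object, C can express as a free function whose first parameter is the object pointer.

```c
#include <assert.h>

/* A C++ method call such as  acct.deposit(5)  is roughly equivalent to
   the plain C call  account_deposit(&acct, 5)  -- the hidden "this"
   pointer is simply made explicit as the first parameter. */
typedef struct {
    int balance;
} Account;

void account_deposit(Account *self, int amount) {
    self->balance += amount;  /* "self" plays the role of C++'s hidden "this" */
}
```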

    • Ernesto says:

      Huh? From that perspective, all high-level programming languages are nothing more than a little syntactic sugar. Yet that has to do with code bloat. OOP gives you some "simple" features for code reuse and other things; that's all it is, yes, but that has a lot to do with code bloat. I am an OOP programmer, I write bloated code, and I know why I do it.

      • StockportJambo says:

        All high level languages are merely a human-readable macro form of what the machine really understands, which is 1s and 0s. Even assembler is just a translation of the opcodes that the CPU understands. No computer understands BASIC, or C# or even C.

        So yes, you can argue that they are syntactic sugar. If you're arguing that it is automatic code bloat, then you're either doing it wrong or using a very inefficient compiler. Dwedit is partially correct (that's how methods work and why they differ from procedures), but there's much more to OOP than that.

  7. Rankine Zero says:

    You seem to be confused as to the purpose of object oriented code.
    The purpose is to clean up code and make it more readable and reusable, as well as making property accesses easier to debug by using property access methods.

    Properly done object oriented code is a thousand times more readable to me than the spaghetti mess of procedural code.

  8. Really?

    Sorry, but this argument/discussion is about 10 years late and totally out of touch. If you're still using BASIC, in which you have a vested interest, you should look at expanding your perspective a bit.

    If you want everything to look like a grey box, be my guest: go ahead and stick with the WinAPI. But in 2012, app DESIGN has become as important as its development. In recognition of this, the WinRT API has moved the XAML "good looks" closer to the metal.

    As for learning curves, there are fantastic, evolved languages that make this a complete non-issue. Ruby, for one, makes 90% of your argument irrelevant, and is a breeze to learn. C# is elegant and powerful, and a good reason why everyone who could dump Visual Basic for C# did.

    • Adas Weber says:

      Dumping Visual Basic for C# makes no sense, because they are almost the same, i.e. VB.NET is the same as C#.NET, with the only significant difference being the syntax.

      If by VB you are referring to VB6 or earlier, then yes I agree that dumping VB6 and opting for either VB.NET or C#.NET is the best way forward, assuming porting is not too big a task.

      • Josh says:

        "VB.NET is the same as C#.NET, with the only significant difference being the syntax"
        Exactly. As you said, it's a significant difference.

      • Adas Weber says:

        @yahoo-JCBAMZRATA44PKSTH7WE5EF3J4:disqus 

        Re: your comment below...

        The syntax may be different, but the capabilities are the same. This means that you won't be limited if you choose VB.NET over C#.NET, or vice versa.

        I program in both, but prefer VB.NET because it's easier to read and is not case sensitive. But that's just a personal preference.

      • StockportJambo says:

        That's because the two compile down to the same result - IL. You can reflect on any assembly using tools and see the same code represented in either VB.NET, C# or indeed any .NET language.

        The whole .NET paradigm gets rid of the whole "my language is better than yours because of X" crap.

        Just one of the benefits of modern development. 

  9. chrisboss says:

    Quote: "spaghetti mess of procedural code."

    That argument died a long time ago. Modern Basics, like the one I personally use, have the modern constructs necessary for proper modular code. It can even do classes and OOP if one requires them, but most of the time that is not needed. It has all the low-level constructs one would ever want, such as pointers, inline assembler, overlaying arrays on any block of memory, and one of the best string-processing command sets I have ever seen. Even Herb Sutter, the C++ expert at Microsoft, basically said in his talk "Why C++?" that managed languages cannot compare to C or C++ in performance (speed/size), and the Basic I use is on par with most any C compiler for execution speed and performance.

    But even with all the modern constructs in a language, procedural-style code has many advantages and will, in my opinion, outperform OOP.

    • StockportJambo says:

      I think you're confused.

      Managed frameworks are slower than native code. That's not rocket science, but it's a trade-off some people choose to make: readability, backwards compatibility and ease of development versus raw speed.

      It has absolutely nothing whatsoever to do with whether the language you use is pure OOP or procedural.

      • StockportJambo is right. Unmanaged code will always be faster than managed code - this has nothing to do with OOP/procedural code. It's a choice, a sacrifice. But in the end managed code benefits far outweigh the problems we were stuck with in the unmanaged world.

  10. Steve says:

    "It may offer some benefits in User Interface coding, but even there it is not necessarily superior."  This sounds like someone whose introduction to OOP was VB 6.0 or earlier.  Microsoft made VB OOP-like, not OOP.  In contrast, Borland succeeded in making procedural Turbo Pascal a very good OOP language.  

    The only way for a procedural programmer to understand OOP is to go all in with it.  Dabbling won't cut it.  I believe that's why other commenters are suggesting you don't fully grasp OOP.  It took me two years of exclusively OOP programming in OO Turbo Pascal to shed my procedural shackles (born of COBOL, assembly, FORTRAN, and BASIC starting in 1973).  It also took an open mind and a willingness to remind myself (when I thought OOP was somewhere between stupid and over-rated) that a lot of really smart people saw value in it.  With Turbo Pascal long dead, Java and C# are now my favorite languages.

    It's also a question of scale - there is some scaffolding in OOP that can overwhelm small projects.  If I'm moving a desk a mile, opting for the size and complexity of an 18-wheeler would be foolish.  If I'm moving 10 offices full of furniture 100 miles, opting for a pickup truck would be madness.  Procedural programming - like a pickup truck - runs out of capacity quickly.

  11. "I use BASIC" LOL
    Chris Boss, you should really not write anything about programming. Erhmmm, on second thought, keep on writing, you make me laugh!
    You bring me nostalgia, man. I started my programming with GW-Basic at 8 years of age. That was the spaghetti-code age.... IF CHRIS = FUNNY GOTO 2834... Now that's readable, ay?

    • chrisboss says:

      I use PowerBasic. PowerBasic is the offshoot of TurboBasic, once sold by Borland International. Borland sold it back to its developer, who renamed it PowerBasic. When it comes to raw speed and power, few languages come close to PowerBasic, IMO. I do a lot of work with graphics. I designed a GUI engine which is powerful but has an amazingly small footprint, well suited to today's Windows tablet PCs. By tapping into the raw Windows API, one opens up the power of Windows. I have written a number of custom controls using the Windows API, including my own Canvas control with a DIB engine (for low-level pixel access) and a proprietary 2D sprite engine (no DirectX required), and even my own glCanvas control with a proprietary 3D OpenGL-based scripting language built in. So please don't be so naive as to think that I write old-fashioned GW-Basic style code.

      A recent reader of one of my articles commented that he overcame his bias as a C#/.NET programmer and decided to try PowerBasic, and, simply put, he was impressed. People see the word BASIC and automatically think of something like GW-Basic. Sadly, many don't realize how far Basic has come.

      For your info, long before GW-Basic, I first learned programming using a terminal (printer) connected to a college mainframe in the 70s. A year later I learned some Fortran, where I had to write programs on punch cards and have them fed into a mainframe computer. My experience is likely far greater than you may appreciate.

        I use Visual Studio 2010 and lately Visual Studio 11 Beta. My code is anything but bloated, and it is easily readable. C# is my favorite. I programmed in Basic and C++ before, but I find both limited and less readable. I programmed in assembly language, QuickBasic and Turbo Pascal before Windows came around.

        I laughed when I saw these "commands" as part of the PowerBasic feature set:
        BEEP
        CIRCLE
        PEEK
        POKE
        Ohhhh nostalgia man!

  12. smist08 says:

    A few other things to consider when choosing a programming language are: 

    - what is the unit testing framework like, can I do test driven development?
    - how easy is it to write automated tests?
    - what static code analysis tools are available?
    - what is the ecosystem for tools really like, quality and quantity?

    One big improvement in programming has been team and collaboration technologies and processes like Git and Agile development. These probably have more effect than the programming language.

  13. I agree with Chris Boss.  This isn't about BASIC vs. C++ vs. Java, et al.  It's about the ease of mapping a conceptual mental model of a program into source code.  And for many programs, I find that a procedural model is a quicker line between that mental model and finished, working code than is OOP.  YMMV.

    • StockportJambo says:

      Precisely. Mindset. Also the size of the task facing you. When you need a simple console app, you don't necessarily need a large object model supporting it - you need straightforward procedural code.

      Luckily, languages such as C++ and C# provide that mechanism.

      When you need something bigger, and have to deal with larger complexity, that's where OOP shines & BASIC is laughed at (and yes, I've used so-called "modern" BASICs before... such as BlitzBasic and even AMOS (remember that?)).

  14. _jaz_ says:

    Chris, I am 32; I have been working as a Java (EE) designer and programmer since 2008 and have been coding since 1996, back then with MS QBasic, Visual Basic, some Pascal and Delphi, and most of the time C and C++.

    After (and I would even say while) reading your post, my feeling is that you're getting old.

    I've followed some of your previous posts here at BetaNews, emphasizing that in the last decade software has become too bloated (both in terms of features and requirements), which I agree with, but you've gone too far.

    You have worked for years with procedural languages; your mind is used to that, and you've been able to gain a deep knowledge of the tools you use. That's fine.
    Switching to (or even understanding as a whole) how OOP works *has* to be difficult for you, given the facts mentioned.

    The first year working with Java, I was usually correcting my colleagues, telling them how they should write it, because they tended to make procedural code inside java classes.

    OOP is not just a different way to write code, but a different way to understand code. OOP can be done in plain C with structs and function pointers. OOP was even added to Visual Basic .NET, showing that BASIC could do it.
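    The claim that OOP can be done in plain C is easy to demonstrate: a struct of function pointers acts as a vtable, and putting the base struct first gives a crude form of inheritance. A minimal sketch (all the type and function names here are invented for illustration):

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* A "class" is a struct whose members include function pointers (a vtable). */
    typedef struct Shape Shape;
    struct Shape {
        double (*area)(const Shape *self);
        const char *(*name)(const Shape *self);
    };

    typedef struct {
        Shape base;     /* "inheritance": base struct placed first */
        double w, h;
    } Rect;

    static double rect_area(const Shape *s) {
        const Rect *r = (const Rect *)s;
        return r->w * r->h;
    }

    static const char *rect_name(const Shape *s) { (void)s; return "rect"; }

    static Rect make_rect(double w, double h) {
        Rect r = { { rect_area, rect_name }, w, h };
        return r;
    }

    int main(void) {
        Rect r = make_rect(3.0, 4.0);
        Shape *s = (Shape *)&r;     /* polymorphic use through the base pointer */
        assert(s->area(s) == 12.0);
        printf("%s area = %.1f\n", s->name(s), s->area(s));
        return 0;
    }
    ```

    Calling through the base pointer is essentially the dynamic dispatch a C++ compiler generates for virtual functions; in C you simply wire the table up by hand.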

    Lastly, I cannot leave without talking about IDEs.
    The things I can do with Eclipse for Java (even though I use a two-year-old version) - immediate syntax checking, code formatting, refactoring class/method/variable names, jumping from parts of the code to related calls, even in .xml files, search and replace with regular expressions, and a long list going on... - are just a pain to even attempt with a text editor (and there are good text editors).
    Even Visual Studio feels difficult for some of these tasks!
    (I won't even mention syntax highlighting, since right now even browsing code on SVN via a browser has syntax highlighting.)

    Sincerely, your comment about IDEs seems more like a bad experience with something you got than something to be taken seriously.

    In my free time, I've been developing audio software in C++ for 10 years. Its exe is 4.28MB (that includes 1.14MB in bitmaps and icons), and the 70 additional DLLs it uses total 1.9MB (the DLLs are additional sounds and effects).
    So again, I understand your previous posts, but I cannot agree at all about your comments about C++ and OOP in general.

  15. chrisboss says:

    Ok, I am not against OOP. OOP has its place. But rather than OOP simply being a feature of a programming language (PowerBasic supports OOP), it has literally taken over modern programming languages. There are those who feel that "too much" OOP may actually slow down development and can create new problems. When I see what I can accomplish using purely procedural code, compared to the more bloated apps created today using so-called modern languages, it does make me wonder whether OOP has been the boon it was intended to be. OOP should be a feature of a language, not be the language.

    Now as far as PowerBasic is concerned, one should note that they sell a DOS compiler (yes, some still use DOS, e.g. for embedded stuff), a console compiler and a GUI compiler. I use the GUI compiler.

  16. Casey says:

    I'll say what I've been saying for years. VB6 was the pinnacle of the BASIC language. Then MSFT shoved .NET down our throats and trashed the language. Every upgrade they try to fix everything they broke and only end up making it worse. .NET devs *need* IntelliSense because of the difficulty of the syntax. My.Computer.Processor.Windows.FileSystem.Drive.Directory.File, etc, etc. It's a bleedin' nightmare, even with Import statements. The whole runtime-as-an-API-layer thing didn't help either. We didn't need more bloat and requirements, especially in the desktop programming environment. It's why I went to Pascal and dumped MSFT. Bigger != better.

  17. John Crane says:

    OOP and developer studio environments just make it easier to write bad code. Don't blame the tools for the workman who uses them improperly.

  18. Josh says:

    Chris. I totally have to agree with others about these arguments being completely dated and outright invalid. All I can say is if you're not getting monumental improvements with today's languages and tools over what we were doing 10-15 years ago, you've missed something huge. 

  20. chrisboss says:

    Maybe because I started programming when computers had so little power, I appreciate the need to get the most from a computer. Today's computers are so powerful compared with what I started with that one should expect a much more "fast and fluid" experience, but sadly we don't get one. Productivity at the expense of performance is not productivity at all. Sure, productivity at the expense of some performance is acceptable, but when end users find themselves "waiting" on the computer to do things, something is amiss. Programmers often don't experience this so much, because they tend to work on bleeding-edge PCs rather than the mass-market PCs most users have. How many of you programmers out there work on a PC with, say, just a Celeron CPU? Do we as programmers develop software for the cheap, low-cost onboard GPUs, or does our development system have some kind of high-end, gamer-style graphics card in it? Take an application that runs great on a high-end multicore CPU, 4GB of RAM (or more) and a decent graphics card, then run it on an Atom CPU with 1GB of memory and an onboard (integrated) GPU, and see what happens.

    • Adas Weber says:

      Chris, application performance problems, such as users waiting on the computer, are not a fault of OOP or the respective development tools.

      Firstly, all modern programming tools allow developers to handle multi-threading very efficiently and very easily, so that such "waiting" scenarios can be virtually eliminated, if so desired by the developer.

      Secondly, you're forgetting that a lot of application development deals with web-service-based data, where the speed of the connection is slow in relation to the computer's processing power. Application performance is therefore often constrained by these bottlenecks, and modern OOP dev tools allow developers to handle such problems with relative ease. That's why we have asynchronous web service calls, for example.

      Your argument is clearly aimed at those who develop small apps which are designed to utilise only the computer's internal resources. But as soon as you start to utilise external resources where you don't have any control over performance or size, application development brings with it a totally different set of challenges, for which modern OOP development tools such as Visual Studio provide the solutions that developers need.

      So instead of seeing things through your simplistic view of application development, try to see it through the eyes of a phone app developer creating an app which must consume external data while still remaining responsive and fluid for the end user. Productivity in terms of app development is very important here, because modern development tools save developers huge amounts of time when developing such cloud-centric solutions.

      • chrisboss says:

        Oh, the fallacy of asynchronous!

        Current programming languages provide asynchronous calls as a feature, so if a task takes some time, simply run it asynchronously. The fallacy is that *all* threading has overhead and can actually slow a computer down, not speed it up. Threading is important, but it is not something to be used indiscriminately. Read a good book on Win32 threading, written by experts who know, and you'll see that threads can actually work against you if you're not careful. The operating system is already running tons of threads just to keep all the services and processes (apps) going. Some apps (like IE) are thread-crazy and put extra pressure on the OS. Multi-core CPUs are great, but a couple of extra cores don't compare to the number of threads already running in Windows. Every time the OS switches between threads, a context switch must be done, and that carries overhead. Now you go ahead and add asynchronous calls to a programming language, and if programmers are not careful, they will abuse them and things will actually get worse. A good programmer knows when NOT to use threads. And yes, I do threading. I work with the low-level Windows APIs for threading and have incorporated them into my tools.

      • StockportJambo says:

        Most computers are multi core these days (yes, even Atoms).

        This makes a massive difference to the efficiency of multi-threaded applications. Your argument was valid 10 years ago, but not now.

        What Adas was talking about anyway was the fact that networks are asynchronous, and a large number of applications are 'connected'. An asynchronous web service call has little to do with threading, and more to do with external factors that the developer has no control over.

        Please re-write Betanews website in PowerBASIC. I bet you can't...

      • Adas Weber says:

        Asynchronous is NOT a fallacy! It allows you to use event driven programming techniques for "asynchronous network connected" applications where you don't have any control over the external data or the connectivity. Putting it simply, an IDE like Visual Studio is extremely productive when it comes to developing solutions like this.

        The fact that you claim async is a fallacy indicates that your argument is based on a very narrow subset of applications. Widen your horizons and you will soon see that your argument simply doesn't hold any water because there are many scenarios where asynchronous event driven techniques are, without question, the BEST way to solve a problem and the BEST way to make an application responsive and fluid for the end user.

  21. Chris Clawson says:

    Good luck getting Windows 95/98 to run on modern hardware. The only way to do it is in a virtual machine.

    • chrisboss says:

      I didn't say to run Windows 95 on a computer of today (just imagine the speed if you could). The suggestion was to run it on a computer which is years past Windows 95, but not necessarily current. When I first bought Windows 95, I had it running on a 100MHz CPU with 8MB of RAM. Years later, when XP and Vista were the current OSes, I took an old 500MHz computer with 256MB of RAM - well beyond what W95 originally ran on - and installed Windows 95 on it. What a difference! The point is that the advances in hardware today are often lost because the OS grows with them, so one does not see much improvement in speed (performance).

      • Chris Clawson says:

        Well, yes. Sorry if that came off as snide; it's just that I've tried to load Win 98 on a modern PC, and it just can't be done. Win 95/98 doesn't understand PCI Express (which connects almost the entire system), and there are no drivers for it, which means no sound, no networking, Windows 3.1-era graphics and a dog-slow hard drive. Oh, and no CD-ROM either, so have fun loading Windows from floppy disks (assuming you even have a floppy drive). ;)

  22. chrisboss says:

    Adas Weber,

    My point about asynchronous events is not that they do not work, but that they are not a perfect solution. Threading, whether via a managed language's asynchronous events or via the Windows API (Win32), is no different. The only difference is that the language implements the threading so it is easy, compared with the more difficult API method. The problem with threads of any kind is the extra overhead of a context switch. Now, with small blocks of threaded code used once in a while, this may not be a problem. But if threading is used too much, even a multicore CPU still has to do a lot more work to keep up. Remember, too, that Windows is a multitasking OS and there are already plenty of threads running at any given time. Windows 7 is already overloaded with too many services running in the background.

    So no matter whether one uses asynchronous calls via a managed language or threading via the API, the same considerations still exist. Asynchronous calls (or threads) are not a magic solution to performance problems. The solution to performance is faster code (better generated machine code) and less code (less machine code to execute). An excellent book on the subject is "Multithreading Applications in Win32" by Jim Beveridge and Robert Wiener.

    Remember too, asynchronous calls in a managed language ultimately use the same API threading methods I use, just under the hood (the language handles it, rather than you using the APIs). Just because I use the low-level Win32 APIs rather than a managed language's async features does not mean I have no knowledge of how threading works.

    • Adas Weber says:

      I never said you had no knowledge about how threading works. I'm simply stating that your views are based on standalone apps where performance and size are the primary criteria, whereas that is only a small subset of applications that are developed, especially nowadays where data sharing and connectivity are primary criteria in many solutions.

      Suppose I want to call a web service and I don't want to wait while the web service does what it needs to do. The recommended way of doing this is to call the web service asynchronously; when the call is completed, an event fires, which I process in the event handler. This is standard event-driven programming practice using asynchronous web service calls. It allows my program to do other things and not be blocked waiting for the web service call to complete.

      So, can you please explain to me why this is bad? Can you also explain to me how you would implement this in PowerBASIC in a way that would perform faster and require less coding than how I would do this in VB.NET? Before you answer, remember that the performance bottleneck in this example is the web service, so no amount of Win32 API magic is going to speed up that part of the application.

    • Adas Weber says:

      Have a read of this -

      http://windowsteamblog.com/windows_phone/b/wpdev/archive/2010/09/13/building-high-performance-silverlight-apps-on-windows-phone-7.aspx

      This is about optimising for performance on cheap, low-cost, onboard-GPU devices - not some high-end stuff! The presence of a dedicated GPU means that threading is required to take advantage of it.

    • Adas Weber says:

      I never said you have no knowledge of threading. I simply said that your views are narrow because you are only considering standalone apps where the primary criteria are performance and size, whereas that is only a small subset of the modern applications we develop.

      As for asynchronous programming...

      Suppose I want to call a web service but don't want to wait for it to complete. Asynchronous techniques do indeed provide a magic solution, because all I have to do is call the web service and process the event handler when the completion event fires. So my program is not held up waiting for the web service call to complete and can do other things in the meantime.

      Here the bottleneck is the web service, so no amount of Win32 API magic will make it any faster.
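      The pattern being described - fire off the call, carry on, and handle a completion event - can be sketched in plain C, using a worker thread as a stand-in for the web service (the names and the simulated latency here are invented for illustration):

      ```c
      #include <assert.h>
      #include <pthread.h>
      #include <stdio.h>
      #include <unistd.h>

      static int callback_status = 0;

      /* "Completed" event handler, fired when the (simulated) service returns. */
      static void on_completed(int status) { callback_status = status; }

      /* Stand-in for an asynchronous web service call running off the main thread. */
      static void *service_call(void *arg) {
          (void)arg;
          usleep(10000);      /* simulated network latency */
          on_completed(200);  /* fire the completion event */
          return NULL;
      }

      int main(void) {
          pthread_t t;
          pthread_create(&t, NULL, service_call, NULL);

          /* The main thread is free to do other work while the call is in flight. */
          long busywork = 0;
          for (int i = 0; i < 1000; i++) busywork += i;

          pthread_join(t, NULL);  /* block only when the result is actually needed */
          assert(callback_status == 200);
          printf("service returned %d\n", callback_status);
          return 0;
      }
      ```

      The main thread only blocks when it actually needs the result; everything before the join runs concurrently with the simulated call.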

    • StockportJambo says:

      Usually when I have a number of small tasks that would be better run in threads, I create a thread pool, which then needs minimal setup on a case-by-case basis, thus minimising the extra overhead that comes with creating an individual thread per task. I'm no threading expert, but I find that works pretty well.
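      A bare-bones version of that idea in C with POSIX threads - a fixed set of workers created once, pulling small tasks from a shared queue, so the per-task thread-creation cost disappears (the sizes and the toy workload are made up for illustration):

      ```c
      #include <assert.h>
      #include <pthread.h>
      #include <stdio.h>

      #define NTHREADS 4
      #define NTASKS   32

      /* Fixed pool: workers are created once and pull task indices from a
         shared counter, so thread-creation cost is paid NTHREADS times,
         not once per task. */
      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static int next_task = 0;
      static long results[NTASKS];

      static void *worker(void *arg) {
          (void)arg;
          for (;;) {
              pthread_mutex_lock(&lock);
              int i = (next_task < NTASKS) ? next_task++ : -1;
              pthread_mutex_unlock(&lock);
              if (i < 0) return NULL;      /* queue drained: worker exits */
              results[i] = (long)i * i;    /* stand-in for a small unit of work */
          }
      }

      int main(void) {
          pthread_t tids[NTHREADS];
          for (int t = 0; t < NTHREADS; t++)
              pthread_create(&tids[t], NULL, worker, NULL);
          for (int t = 0; t < NTHREADS; t++)
              pthread_join(tids[t], NULL);
          for (int i = 0; i < NTASKS; i++)
              assert(results[i] == (long)i * i);
          puts("all tasks done");
          return 0;
      }
      ```

      A production pool would block workers on a condition variable and accept new tasks after startup, but the cost structure is the same: setup once, then cheap dispatch.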

      You say that Windows 7 is "already overloaded with too many services running in the background". How many is too many? It's only too many if it's more than you actually need. This site has an excellent guide on determining what services you can disable safely.

      http://www.blackviper.com/service-configurations/black-vipers-windows-7-service-pack-1-service-configurations/ 

  23. chrisboss says:

    Be careful how you take my comments. I never said one should never use threads (or asynchronous events). They have a purpose. At times there is a background task (e.g. polling a device like a serial port, or downloading data via TCP in the background) where a thread is an absolute necessity.

    The problem is that when threading is made easier via a programming language (e.g. managed languages' asynchronous features), it is easy for programmers to abuse it. Threading was not meant to solve performance issues. It was designed to allow long-running background tasks to be run so the primary UI thread is not bogged down.

    What I have seen, even with API programmers (Win32), is that if one thread seems to solve an issue, the programmer easily thinks that more threads will make things even faster. Actually, they do not. There is overhead for every single thread run in Windows, and the more threads running, the slower things can get. The book I mentioned in another comment discusses this in detail and recommends using threads judiciously.

    The reason I used the term "fallacy" when it comes to asynchronous calls is some programmers' misunderstanding of the Windows threading model and of how threads have overhead and should be used very carefully. Microsoft adding asynchronous events to their managed languages, while it can be a good thing, also opens programmers up to all sorts of problems. Just because it is "easy" to use threads now does not eliminate the dangers of multithreading. Threads should always be used as a last resort. If the only way to accomplish a task requires a thread, then it likely makes sense. If the task could be done, while maintaining performance, using only the primary thread (the app's process), then threading does not improve performance but can actually decrease it.

  24. Kristina says:

    Please, enough with the Microsoft paradigms.

    For CPU bound tasks, C++ compiled using GCC under a UNIX based system like Linux or BSD is where it's at.

    OO is more than just a different way of thinking: it allows you to separate and isolate all the workings in a component named, remarkably, after a similarly functioning object in the real world. It also allows you to protect and hide data members (functions/variables) for safety and simplicity, resulting in an encapsulated system.

    For example, including MySQL in a C++ project allows data exchanges to occur by dealing with objects, which you expect to just work without any need to understand the underlying implementation. One object can be made interchangeable with others when dealing with a given system from any kind of application.

    I think you struggle to visualise code and create designs for anything beyond simply sitting down and coding with very little planning.

    These days we write far more complex (but more capable, resilient) systems, making fewer crucial mistakes (like bluescreens in a certain OS, Win 95/98) and isolating buggy elements.

    Just stick with the standards and trust that the people behind them know what they're doing; much time and money has been spent developing what we use today, and in an ideal world it just works out better. Clients would have to be mugs to use such outdated concepts and software.

  25. Adam Richardson says:

    Nice article. I especially appreciated your perspective on programmers' obsession with bleeding edge technology.
