
Cromulent

So what is your take on the current situation? Just about every day as I am reading programming websites I hear about a new idiom that seems to be "the next big thing", promising to reduce the amount of time spent programming while still producing easy-to-debug code.

Yet in the last 20 years computer hardware has grown exponentially in power, while software has not shown the same increase in speed that one would expect from such improvements in hardware. This must mean one of two things: 1) software has become more complex (meaning it has more features), or 2) programming techniques have become so sloppy that we actually need that increase in hardware power to keep our software working at a reasonable speed.

Personally I am of the opinion that it is a mixture of the above. Software certainly does do a lot more than it did 20 years ago, but that does not necessarily account for the relatively modest speed increases that software gets when compared with the underlying hardware.

I am interested in your opinions on this.
 
Hardware has become faster primarily through the use of smaller transistors which allow faster switching speeds.

Software is based on algorithms. Fundamentally, an increase must come from a faster algorithm. Otherwise, you are just executing the same number of steps in an algorithm on faster hardware.

Programmer efficiency can be measured in SLOC (source lines of code). Of course, higher-level languages use fewer lines of code to express the same functionality. Someone has probably expressed programmer productivity in terms of "functionality" irrespective of lines of code. I think that higher-level languages also increase productivity. (Though you'll probably have more bugs per line of code in high-level languages.)

Anyway, over the years I would say that:
1. software is more complex (i.e. provides more functionality)
2. programmer productivity has increased through the use of high-level languages and ability to integrate with other software (like databases or GUI front-ends).
3. Few new algorithms reach the masses of programmers.
 
I don't think programming has become slopified in proportion to Moore's law; there has to be some other effect going on. I'm no expert, but I suspect that at least some of the gap we're seeing has to do with all of the work that the higher layers of abstraction in the frameworks are doing.
 
Hardware has become faster primarily through the use of smaller transistors which allow faster switching speeds.

Software is based on algorithms. Fundamentally, an increase must come from a faster algorithm. Otherwise, you are just executing the same number of steps in an algorithm on faster hardware.

The hardware is the determining factor in terms of speed, though. If you found a 20-year-old program and managed to get it to run on today's hardware, it would obviously respond faster and complete its tasks at a faster rate. Therefore it is not just based on algorithms; executing an existing algorithm faster and executing a new, faster algorithm basically amount to one and the same thing in the end: a general speed-up of the software being run.

Programmer efficiency can be measured in SLOC (source lines of code). Of course, higher-level languages use fewer lines of code to express the same functionality. Someone has probably expressed programmer productivity in terms of "functionality" irrespective of lines of code. I think that higher-level languages also increase productivity. (Though you'll probably have more bugs per line of code in high-level languages.)

This is exactly my point. Programmers can now write code in a day or less that would have taken weeks in the past. But that code comes with huge overheads. Just look at the Cocoa frameworks: to a developer just writing an application they are simple and easy to use, but every time your application makes a function call it is executing vast amounts of code (comparatively) that you have never seen. 20 years ago just about everything was hand crafted and optimised.

Anyway, over the years I would say that:
1. software is more complex (i.e. provides more functionality)
2. programmer productivity has increased through the use of high-level languages and ability to integrate with other software (like databases or GUI front-ends).
3. Few new algorithms reach the masses of programmers.

Thanks for your opinions on this.
 
Programs are doing vastly more work in ways that might not be immediately apparent. Consider displaying text, as TextEdit does:

Original Mac:
Generate and position glyphs (bitmap font, so that's just blitting pixels from the resource manager)
Linebreak (primitive algorithm presumably)
Blit to low res monochrome screen during refresh interval (fullscreen was about 43kB assuming proper packing, 175kB if byte aligned)


Modern Mac:
Generate and position glyphs (OpenType outline font w/ hinting, subpixel antialiasing, ligatures, optional character selection, etc...)
Linebreak and kern (near-TeX-quality algorithm)
Blit to 32 bit high res backing store (fullscreen on a macbook is ~4.3MB)
Upload backing store to GPU
Composite backing stores with those of other windows (including alpha blending)
Render composited texture to GPU's framebuffer
Blit framebuffer to display
 
Spot on. Programs aren't more complicated than they were twenty years ago; they just use libraries that the programmer never realizes (or never needs to realize) are absolutely massive in scope.
 
I've started to reply to this about 4 times now, and have always given up because I haven't been able to organize my thoughts very well. I'll try to aim for brevity, and hopefully that will help.

The overhead of running some bit of assembly to display "Hello, World!" on a bare CPU is 0.
The overhead of running a class called HelloWorld that uses System.out.println to display "Hello, World!" in Java running on a JVM running on an OS that's perhaps running on a Hypervisor is dramatic.

So why would anyone opt for all of that ridiculous overhead? There are many reasons, but being able to run easily on your user's OS is a big one. The complexity increase of the OS over the last 25 years has been profound. Preemptive multitasking, virtual memory, advanced APIs, abstraction of every piece of hardware, etc. All of those things make it much easier for a developer to build new software without worrying about the intricacies of the target machine, building a lot of redundant data structures of their own, and so on.

Another point, though, is... how fast do you need it to be? Is the software responsive? If so, does it need to respond in .05 seconds instead of .08 seconds? At what cost to reliability, maintainability, etc.?

I don't know that programmers are really getting lazier, they just have options to use high level tools. Ignoring safer, easier to use high-level frameworks, languages, VMs, etc. because they are marginally slower would be pretty stubborn. If there is a performance-critical application or portion of an application, using native instead of interpreted code in this case, etc. might be prudent, but writing in the lowest level because there might be something performance intensive seems foolhardy.

As I feared, this has meandered without any central focus. Basically I don't feel like machines are much slower or less usable now than 10-15 years ago because of software bloat. I don't have the expectation that websites will load 5x faster because the clock speed of my machine is 5x faster. There are so many things that are disk/network/etc. bound, and those things have gotten faster more gradually than CPUs, memory, etc.

-Lee
 
...
Personally I am of the opinion that it is a mixture of the above. Software certainly does do a lot more than it did 20 years ago, but that does not necessarily account for the relatively modest speed increases that software gets when compared with the underlying hardware.

I am interested in your opinions on this.

Hi Cromulent

I'm not sure you can generalise this to all parts of the industry. For example, computer games tend to squeeze as much performance out of the hardware as possible. Software might lag behind when new hardware is introduced, e.g. when a new console comes out, but it soon catches up. Another area would be image manipulation: programs like Photoshop will be highly optimised in key areas - it's just that we are editing megapixel images with multiple layers and expecting an unlimited number of undos, which makes it seem like CS runs slower than the good old days of Photoshop 5 or whatever.

On a more general note, the way I see it, new languages and programming idioms don't really offer anything new. They just let you specify the same old things using a different syntax and, if you're lucky, in fewer lines of code. But that's because they can't offer you anything radically new. We're still typing away on our Turing machines, which have been around since day 1. I can't see there being a giant leap forwards on the software side until there is a corresponding major change on the hardware side.

ß e n
 
Yet in the last 20 years computer hardware has grown exponentially in power, while software has not shown the same increase in speed that one would expect from such improvements in hardware.

Is this actually true? I think Catfish_Man's example shows that there is actually a lot more done for a "comparable" result.

This must mean one of two things: 1) software has become more complex (meaning it has more features), or 2) programming techniques have become so sloppy that we actually need that increase in hardware power to keep our software working at a reasonable speed.

I think that 1 is true but it is a side-effect of the fact that we are simply throwing more complex problems at computers. Look at search algorithms, bioinformatics, weather prediction, image processing and a whole host of machine learning problems. Increases in processing power and storage mean that we are able to model in much more detail the real world problems we aim to solve.

I don't really think it is down to sloppiness - people just realise they can get more done in less time on their part by re-using the work of others (which is necessarily generalised). There is certainly space for optimisation and improvement, it is just less of a priority.

On a final note, I have just realised that 20 years ago was 1989, which feels frighteningly recent :eek:
 
On a more general note, the way I see it, new languages and programming idioms don't really offer anything new. They just let you specify the same old things using a different syntax and, if you're lucky, in fewer lines of code.

Safety is more important than conciseness. Use a modern high level (garbage collected, no raw pointers, bounds-checked arrays) language and you're automatically immune to several whole categories of crashes and security issues. Use a language like Haskell and you're protected against a few more.
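
To illustrate just the bounds-checking half of that claim, here is a toy sketch of my own (not from the post, and in C++ rather than a fully managed language):

#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    int raw[4] = {1, 2, 3, 4};
    std::vector<int> checked = {1, 2, 3, 4};

    std::cout << raw[0] << '\n';
    // Raw array: reading past the end is undefined behaviour.
    // It might "work", crash, or silently corrupt memory.
    // int oops = raw[10];

    // Checked access: an out-of-range index becomes a well-defined,
    // catchable error instead of memory corruption.
    try {
        std::cout << checked.at(10) << '\n';
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
    return 0;
}

A whole class of buffer-overflow crashes and exploits simply cannot happen through the checked interface.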
 
I think what he meant was that even though hardware speed doubles every 18 months, computers don't feel that much faster; thus software inefficiencies and programmer sloppiness have also been doubling every 18 months. Well, yes, using libraries IS inefficient, in terms of processor cycles, but it reduces the time programmers need to produce software.

The increase in speed of computers means:
Programmers have more time.
Users can have software that looks better or has more functionality and is easier to use.

But users won't be able to notice significant speed increases in day-to-day use, unless they run something like a password-cracking program, which isn't day-to-day use. In other words, SimpleText 15 years ago on a Performa ran at a similar speed to TextEdit on an Intel Core 2 Duo today, despite processor speed doubling every 18 months or so.

Probably because users will always tolerate the computer responding in 50 ms (Figure Taken Out Of Thin Air), and always have, since the first PC. It just means software producers pack more things into those 50 ms rather than reducing the time needed for the computer to respond.
 
Programmer efficiency can be measured in SLOC (source lines of code).

I wouldn't call it efficiency.

Measuring programmers with a SLOC per day metric is one reason that Microsoft programs have a bit of code bloat.
But if it's fast enough, it's good enough.

Algorithms generally improve speed assuming hardware speed is kept constant. Some algorithms trade size for speed.
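
As a concrete illustration of the size-for-speed trade (my own example, not from the post): spend 256 bytes on a precomputed table and each bit-count query becomes four lookups instead of a loop over 32 bits.

#include <array>
#include <cstdint>
#include <iostream>

// Precompute the bit count of every possible byte value once.
std::array<std::uint8_t, 256> build_table() {
    std::array<std::uint8_t, 256> table{};
    for (int v = 0; v < 256; ++v) {
        int bits = 0;
        for (int b = v; b != 0; b >>= 1) bits += b & 1;
        table[v] = static_cast<std::uint8_t>(bits);
    }
    return table;
}

int main() {
    static const std::array<std::uint8_t, 256> table = build_table();
    std::uint32_t x = 0xDEADBEEF;
    // Answer each query with table lookups rather than looping over the bits.
    int count = table[x & 0xFF] + table[(x >> 8) & 0xFF]
              + table[(x >> 16) & 0xFF] + table[(x >> 24) & 0xFF];
    std::cout << count << '\n';   // prints 24, the number of set bits in 0xDEADBEEF
    return 0;
}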
 
I read an interesting viewpoint recently (in Kent Beck's "Implementation Patterns"): When programming, you should keep in mind that you are communicating; not with the computer, but with the next programmer who's going to look at your code.
 
I don't mean to de-rail this discussion and turn it into a debate on Wil Shipley's personal coding philosophy, but on the off chance you have not come across this article, you should give it a read. It's a bit counterintuitive, and at the same time brilliantly practical.
 
Gah, I can't believe I forgot about this thread. Doh!

The article that GorillaPaws posted is actually very interesting. I especially like the part where he says "write every line to be bulletproof".

Sometimes I wonder where all the bugs come from in programs. Some are obviously hard to find and would cause anyone a problem, but other times bugs are so mind-bogglingly simple that you wonder what the programmer was thinking when they wrote the code.

Going back to my original point (which, having looked at it again, I don't think I explained very well), what I was trying to say is that whilst modern languages and idioms offer much more safety than older languages, they don't necessarily offer increased productivity.

Let's take the object-oriented paradigm, as everyone knows that. The idea is that you treat each class as an object and each method is the means by which you interact with that object. Fine. But that paradigm also requires much more design time than a standard procedural language. There are numerous ways in which one could objectify an application and each has its pros and cons. In order to come up with a rational design you need to spend time thinking about it.

In procedural languages such as C the emphasis is on data and how you manipulate it. One uses functions to manipulate the data, and each function does something different, so the focus is on actions, be they internal or external.
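
To make the contrast concrete, here is a toy sketch of my own (written in C++ so both styles fit in one file; the account example and names are made up):

#include <iostream>

// Procedural style: the emphasis is on the data and the functions that act on it.
struct Account { double balance; };
void deposit(Account& a, double amount) { a.balance += amount; }

// Object-oriented style: the data and the operations are bundled into one
// object, and the method is the only way to interact with it.
class BankAccount {
public:
    void deposit(double amount) { balance += amount; }
    double currentBalance() const { return balance; }
private:
    double balance = 0.0;
};

int main() {
    Account a{0.0};
    deposit(a, 100.0);

    BankAccount b;
    b.deposit(100.0);
    std::cout << a.balance << ' ' << b.currentBalance() << '\n';
    return 0;
}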

My point was: with the added design time required for OOP, are programmers really more productive? I mean, UML is a case in point. How many times do you hear about programmers going overboard with all these new object diagrams and laying out all their classes in pretty trees, etc.? Yet does this have a tangible effect? Does it create more stable software? Has software development complexity increased because of higher-level languages or merely because computers have got more powerful?
 
My point was: with the added design time required for OOP, are programmers really more productive? I mean, UML is a case in point. How many times do you hear about programmers going overboard with all these new object diagrams and laying out all their classes in pretty trees, etc.? Yet does this have a tangible effect? Does it create more stable software? Has software development complexity increased because of higher-level languages or merely because computers have got more powerful?

I don't know the answer to this question, but from what I have gleaned through reading a lot of blog posts and listening to podcasts, it seems you see the productivity pay-off on the back end, years later, when you're trying to extend and update your program (at least in theory). I have also heard the argument that code re-use is a bit over-hyped by the OOP camp, the reason being that re-use is often difficult to pull off: most code tends to be app-specific in nature anyway, and taking the time/energy/resources to design things for future re-use is actually slower than just re-writing it for the 2nd app (a related point to the question you're asking, I think).

I don't have any experience/knowledge to have a useful opinion on this question, but I thought I'd throw out some of the arguments I've heard. I'd be interested to hear what others think about these issues.
 
My point was: with the added design time required for OOP, are programmers really more productive? I mean, UML is a case in point. How many times do you hear about programmers going overboard with all these new object diagrams and laying out all their classes in pretty trees, etc.? Yet does this have a tangible effect? Does it create more stable software? Has software development complexity increased because of higher-level languages or merely because computers have got more powerful?

No, it does not. The problem with large upfront design is that it assumes nothing will ever change. Businesses are dynamic. Rarely does a person know exactly what they want at once.

More often than not, that pretty UML diagram they've created will become outdated within a week or two. OK. Great. So we need to make a few changes. Adjust a few classes, change the database a bit and the UI and we're back on track. Well, how do places know all of the stuff still performs correctly? "Well, the business user went in there and tried doing their stuff again and it worked." Really. Every combination? "Yeah, they're great at testing". Unlikely.

Automated tests that prove your specifications are hugely valuable while maintaining and developing a sizable application. How many places actually do this? Not many.
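
For what it's worth, such a test can be as small as this sketch (hypothetical function and values, purely to show the idea):

#include <cassert>

// Hypothetical function under test.
int apply_discount(int price_cents, int percent) {
    return price_cents - (price_cents * percent) / 100;
}

int main() {
    // Each assertion encodes a piece of the specification; re-running the
    // tests after a class or database change replaces "the business user
    // clicked around and it seemed fine".
    assert(apply_discount(1000, 10) == 900);
    assert(apply_discount(1000, 0) == 1000);
    assert(apply_discount(0, 50) == 0);
    return 0;
}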
 
My point was: with the added design time required for OOP, are programmers really more productive? I mean, UML is a case in point. How many times do you hear about programmers going overboard with all these new object diagrams and laying out all their classes in pretty trees, etc.?

OOP doesn't require any of that stuff*. For the problems it addresses, it really works quite well. People build all kinds of religions and processes and crap on top of it though. Also it doesn't apply to all problem types; I find it works best for UIs and 'glue' layers, and not so well for computation-heavy parts of programs.


*And in fact deep class hierarchies are considered a warning sign of poor design in many OOP languages; composition is more flexible than inheritance.
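
A small sketch of what composition looks like in practice (my own example in C++; the class names are made up):

#include <iostream>
#include <string>

class Logger {
public:
    virtual ~Logger() = default;
    virtual void log(const std::string& msg) { std::cout << msg << '\n'; }
};

class QuietLogger : public Logger {
public:
    void log(const std::string&) override {}   // drop everything
};

class OrderProcessor {
public:
    explicit OrderProcessor(Logger& logger) : logger_(logger) {}
    void process() { logger_.log("order processed"); }
private:
    // Composition: OrderProcessor has-a Logger rather than is-a Logger,
    // so the behaviour can be swapped without touching its own type.
    Logger& logger_;
};

int main() {
    Logger noisy;
    QuietLogger quiet;
    OrderProcessor a(noisy), b(quiet);   // same class, different behaviour
    a.process();
    b.process();
    return 0;
}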
 
Where do bugs come from?

My current project is a scheduler which consists of these components: timer wheel, state machines, packet generator, input queues, memory pool, etc. I am writing in a procedural language, C, but the above components are mapped to different files. As it turned out, I re-used code for the packet generator, input queues, and memory pool from an earlier project.

I made a bug when I re-used the memory pool, because I modified the code to support 2 types of objects. When I copied and changed this code, I used a copy of a parameter (a pointer) when I should have used a pointer-to-pointer. This led to the memory pool always returning the same memory, which in turn led to a circular linked list. Anyway, it took me two days to track down this bug, which was only noticed when the circular linked list caused too many packets to be generated.

So what led to this bug? I would say that I took the old code (which I wrote myself a few months earlier) and thought I understood how to modify it properly. And I did use it properly and changed several parts of it, except for the failure to use a ptr-to-ptr. It didn't help that the buggy behavior depended on input timing. The bug was easy to fix once I understood why it occurred, but was hard to figure out.
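
Since the actual pool code isn't shown, here is a stripped-down reconstruction of that kind of mistake (hypothetical names, C-style C++):

#include <cstdio>

struct Block { Block* next; };

// Buggy version: 'head' is a copy of the caller's pointer, so advancing it
// here has no effect outside this function. Every call hands back the same
// block, which is roughly the symptom described above.
Block* alloc_buggy(Block* head) {
    Block* b = head;
    head = head->next;   // only the local copy moves
    return b;
}

// Fixed version: take a pointer-to-pointer so the caller's free-list head
// actually advances.
Block* alloc_fixed(Block** head) {
    Block* b = *head;
    *head = (*head)->next;
    return b;
}

int main() {
    Block pool[3];
    pool[0].next = &pool[1];
    pool[1].next = &pool[2];
    pool[2].next = nullptr;

    Block* head = &pool[0];
    // The buggy allocator returns the same address twice.
    std::printf("buggy: %p %p\n", (void*)alloc_buggy(head), (void*)alloc_buggy(head));
    // The fixed allocator walks the free list as intended.
    std::printf("fixed: %p %p\n", (void*)alloc_fixed(&head), (void*)alloc_fixed(&head));
    return 0;
}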

BTW, programmers use memory pools when they don't want to rely on the usual dynamic memory mechanisms (new/delete or malloc/free). This is due to fear of memory fragmentation, poor performance, or non-deterministic garbage collection, etc. It may or may not be true, but if there was a memory management library that provided the right characteristics I could have saved some time.

Now, a simple bug was in my packet generation routine, which sent a packet with a field that was an Ethernet MAC address. It had to be in network byte order and came from a 64-bit integer. Well, I did some bit shifting operations and masking, but used the mask 0x0F instead of 0xFF. It was simple to find and fix. Why did it occur? Well, I would say it occurred because I didn't think much about this line of code, which was relatively simple. I masked 4 bits instead of 8 bits. Basically, I had a 50/50 chance of getting it right. It cost me about 15 minutes to find and fix the bug, but if I had spent another 15 seconds thinking I would've gotten it right.
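
For illustration, here is roughly what that line has to do (not the poster's actual routine; the test value is made up):

#include <cstdint>
#include <cstdio>

// Write the low 48 bits of 'mac' into 'out' in network byte order
// (most significant byte first). Masking with 0xFF keeps a whole byte;
// the 0x0F mask in the bug described above kept only the low nibble.
void write_mac(std::uint64_t mac, std::uint8_t out[6]) {
    for (int i = 0; i < 6; ++i) {
        out[i] = (mac >> (8 * (5 - i))) & 0xFF;
    }
}

int main() {
    std::uint8_t buf[6];
    write_mac(0x0011223344556677ULL, buf);   // low 48 bits: 22 33 44 55 66 77
    for (std::uint8_t b : buf) std::printf("%02x ", static_cast<unsigned>(b));
    std::printf("\n");
    return 0;
}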

One thing about these bugs is that their discoverability is so different. The first one depends on timing and the second one immediately fails basic functionality (as seen by the packet receiver).

Anyway, my project (less than 10k lines of code so far) will undoubtedly have many more bugs. And after we release it, some "simple bugs" will be ignored until there is some newer feature or important bug fix to necessitate a new release. BTW, no one here needs to worry about getting my buggy software. It's not used by individuals.
 
OOP doesn't require any of that stuff.

Quoted for truth.

I recently took a course where the instructor insisted on us using UML diagrams for solving the exercises. He simply couldn't believe that my colleague and I wrote down the solution in code using pencil and paper, and then extracted the UML from the code.

I sometimes get the feeling that the "not so bright" programmers can "hide" behind the UML intricacies and grill you over some line being dot-dot-dashed and connected via a black diamond on one end and an open arrow on the other.

Diagrams are most useful for the first five minutes when showing someone around in a new (large) project. "Here's the Widget manager, it owns a collection of Widgets, then you have the FooController, which sends events to the Observer, and that's where I think the problem with the life cycles are. Here's the FooController.h, and here's Observer.h"

On the other hand: Programmers are like car drivers. Every one of them is better than average.
 
I'm pretty sure people were writing really sloppy code 20 years ago too *cough*Windows 3.0*cough*. If it's just in a programmer's personality to be sloppy, then s/he will be sloppy no matter what language or IDE tools they are using, but I would hope that modern API tools have helped a little bit here, with code completion, auto-documentation scripts, syntax highlighting, tidying of braces/indentation, better compiler warnings, etc.
 
BTW, programmers use memory pools when they don't want to rely on the usual dynamic memory mechanisms (new/delete or malloc/free). This is due to fear of memory fragmentation, poor performance, or non-deterministic garbage collection, etc. It may or may not be true, but if there was a memory management library that provided the right characteristics I could have saved some time.

In procedural languages, where you can get into a lot of trouble, memory pools are probably the best way to do it. In OOP, I'd argue that if you have those sorts of fears, you are better off using object pools than ignoring the usual memory mechanisms. In C++, you can do all sorts of interesting things with the STL and your classes so that you can create a pool of common objects ready to go if you cycle through a bunch of them constantly creating/destroying them. In Obj-C, you can do the same things with the bonus that basic ref counting is already provided.
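
A minimal sketch of that idea (my own, assuming C++14 or later; the Packet type is made up):

#include <memory>
#include <vector>

// Object pool sketch: objects are recycled through a free list instead of
// being created and destroyed over and over.
template <typename T>
class ObjectPool {
public:
    std::unique_ptr<T> acquire() {
        if (free_.empty())
            return std::make_unique<T>();        // pool empty: make a new one
        std::unique_ptr<T> obj = std::move(free_.back());
        free_.pop_back();
        return obj;                              // otherwise reuse a recycled one
    }
    void release(std::unique_ptr<T> obj) { free_.push_back(std::move(obj)); }
private:
    std::vector<std::unique_ptr<T>> free_;
};

struct Packet { char payload[1500]; };

int main() {
    ObjectPool<Packet> pool;
    auto p = pool.acquire();
    // ... fill and send the packet ...
    pool.release(std::move(p));                  // back into the pool for reuse
    return 0;
}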
 
I recently took a course where the instructor insisted on us using UML diagrams for solving the exercises. He simply couldn't believe that my colleague and I wrote down the solution in code using pencil and paper, and then extracted the UML from the code.

I sometimes get the feeling that the "not so bright" programmers can "hide" behind the UML intricacies and grill you over some line being dot-dot-dashed and connected via a black diamond on one end and an open arrow on the other.

UML is supposed to define interfaces between different objects. Someone should be able to give you a UML diagram and say "this is how your object should interface with the world around it, now go and write code that implements it". The intention is that if the UML designer designs all the interfaces correctly, and each coder creates an object exactly according to the UML diagram, then the bits just fit together. Nice for really big projects where it is necessary that things fit together even if they are written by coders who never talked to each other.

Now it seems that this instructor gave you the interfaces in some other non-UML form, and expected you to design a UML diagram for your solution and then the solution itself. That is plain stupid. What would make sense would be to give you a verbal specification and ask you to turn that specification into a UML diagram, then give that UML diagram to someone else and let them implement it. UML is for communication. In your situation, the UML served no purpose. Creating a UML diagram for how your code works serves absolutely no purpose.
 
UML is supposed to define interfaces between different objects. Someone should be able to give you a UML diagram and say "this is how your object should interface with the world around it, now go and write code that implements it". The intention is that if the UML designer designs all the interfaces correctly, and each coder creates an object exactly according to the UML diagram, then the bits just fit together. Nice for really big projects where it is necessary that things fit together even if they are written by coders who never talked to each other.

Exactly. There are even tools which can turn UML into real code. You have to do it this way, or the "design" and the "implementation" start growing apart. Everybody knows that when you separate documentation from code, the documentation will always be out of date.

My point is this: the UML spec is an example of "design by committee". The spec is huge. The original idea that a bunch of boxes and lines communicate ideas more easily than a printout of a header file was probably valid, until they took the idea too far. If I have to remember what dotted versus dashed lines mean, what closed versus open diamonds mean, and also have to take the "stereotyping" into account, I might as well learn what "public" versus "private" means and what a plain pointer versus a refcounted pointer is. The instructor (a big proponent of UML) actually admitted that the tools all implement a different subset of UML, so the danger is that you lock your entire code repository in to a single vendor's implementation.

There is also the matter of tracking changes: in text-based coding I can use very basic revision tracking to see that bug #4234 was solved in revision #1234 by breaking a life-cycle dependency, changing a refcounted pointer to a weak pointer in header file blah.h. I'm not aware of any tools which can quickly show me that a diamond went from black to white (or was it the other way around?) in a certain UML diagram in revision #1234.

I would much rather write down the public part of a class in a header file and then send that off to be implemented by someone else. Why add an extra layer (in this case the boxes-and-lines layer) which can introduce errors and misunderstandings? In fact, if you substitute "header file" for "UML diagram" in the paragraph above, I would be 100% behind it...
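
For example, a contract like the following hypothetical header says most of what a class diagram would, in a form that diffs and compiles (all names invented for illustration):

// Scheduler.h -- the public part of the class is the contract handed to
// whoever implements or uses it; no boxes and lines required.
#pragma once
#include <cstddef>
#include <cstdint>

class Scheduler {
public:
    // Queue a packet for transmission; returns false if the queue is full.
    bool enqueue(const std::uint8_t* data, std::size_t length);

    // Advance the timer wheel by one tick.
    void tick();

private:
    struct Impl;            // implementation details kept out of the contract
    Impl* impl_ = nullptr;
};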
 