I do think that pushing more metadata into the header file is probably a good thing. For example, in ObjC, which gives more information about the expected behavior of an interface?
This:
Code:
- (id) foo;
- (void) setFoo:(id)newFoo;

or this:
Code:
@property (readwrite, copy, nonatomic) Foo *foo;
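
The property answers questions the plain accessors leave open: does the setter copy or retain its argument, and is access synchronized? Roughly, the declaration above promises a setter along these lines (a pre-ARC sketch, not the literal code the compiler synthesizes):
Code:
// Approximately what (readwrite, copy, nonatomic) implies for the
// setter -- a sketch, not the exact synthesized implementation.
- (void)setFoo:(Foo *)newFoo {
    if (foo != newFoo) {
        [foo release];        // copy semantics: release the old value...
        foo = [newFoo copy];  // ...and store a copy of the new one
    }
}
// nonatomic: no locking around the access, so concurrent reads and
// writes are not protected.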

I hesitate to overload the property metadata syntax further, but I can imagine all sorts of other metadata I'd like to attach to things (some of it currently available using GCC's __attribute__(()) syntax): non-nil, persistent, undoable, nonblocking, legal value ranges or predicates, pre/post conditions, etc.

Having the compiler enforce your API contract when possible is a pretty appealing idea.
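
To make that concrete, here's a rough sketch of the kind of annotations I mean, using attribute syntax GCC already understands today (the Foo class, its method, and the labelLength function are made-up examples):
Code:
#import <Foundation/Foundation.h>
#include <stddef.h>

// Hypothetical class, purely for illustration.
@interface Foo : NSObject

// Deprecation is one contract GCC can already check at compile time:
// any caller of this method gets a warning.
- (void)setFooByName:(NSString *)name __attribute__((deprecated));

@end

// For plain C functions, nonnull is a first step toward "non-nil":
// passing NULL as the first argument triggers a compiler warning.
size_t labelLength(const char *label) __attribute__((nonnull(1)));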
 
Having rather embarrassingly bashed UML, I find myself in a position where it would actually be really useful.

I am porting a C++ library to C#, and it would be a great help to have a diagram of the public interface, so that I can make sure I implement it the same way as the original library and then do the behind-the-scenes stuff in my own way.
 
My point was: given the added design time that OOP requires, are programmers really more productive? UML is a case in point. How many times do you hear about programmers going overboard with all these new object diagrams, laying out all their classes in pretty trees, etc.? Yet does this have a tangible effect? Does it create more stable software? Has software development complexity increased because of higher-level languages, or merely because computers have gotten more powerful?
As a counter to all the UML bashing going on here: a company I used to work for was known for coming up with specs on the back of a napkin, and it constantly turned out less-than-optimal code, over time and over budget. We adopted UML, or at least a subset thereof: we wrote use cases for the required functionality and created sequence diagrams defining the objects and their interfaces. Code generation was used to create stubs, and from time to time the code was fed back into the tool to reflect the occasional modifications. It turned out to be the smoothest project I ever worked on, and it came in under budget.

Having a good design methodology in place is very important, IMHO. Whether that's a big UML design or working in a TDD/agile style isn't important, just as long as the processes are in place, well understood by the people using them, and used in the right way to make a good piece of software. Trying to be all lightweight and agile is no use either if you screw it up.
 
As for the point about software being bloated: developer time costs a lot more than processor time. As long as the software does the job in a suitable time frame, keeping the code base maintainable and cranking out new releases quickly are more important than super-optimal code.

And the point about the huge difference in the amount of work a computer needs to do to get a bit of text on the screen in a 20-year-old OS compared to a current one is very significant too. There is a lot going on at every stage these days that we take for granted.
 