mrichmon said:
A compiler error indicates the nature of the error. Reflecting on the error and finding the problem and resolving it is a very valuable learning exercise for people learning how to program. An IDE such as Eclipse eliminates this process by marking the specific part of code that contains a problem. As I stated in a previous post, many people learning to write code end up making random changes rather than trying to understand what is going on in their code. An IDE that points to the problem area of code reinforces this approach to coding.
Do you have any logical arguments to back that up? I am hearing a steady stream of 'expert' opinion stated as fact, but very little in the way of reasons why anything you are saying is true. Why does having an IDE point out where a mistake was made encourage voodoo programming? How is it at all functionally different whether I compile in a terminal and get a message like "line 35: '(' expected" or the IDE automatically compiles the file, highlights the line, and presents the same message the compiler does?
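To make the comparison concrete, here is a hypothetical Foo.java (the file name and line number are invented for illustration) with a missing '(', and roughly what javac prints for it (exact wording varies a bit by version):

    // Foo.java -- invented example; the '(' after main is missing
    public class Foo {
        public static void main String[] args) {
            System.out.println("hello");
        }
    }

    $ javac Foo.java
    Foo.java:3: error: '(' expected
        public static void main String[] args) {
                                 ^
    1 error

Eclipse compiles the same source, flags the same line, and shows essentially the same message; the only difference is that you don't have to switch windows to read it.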
I would be against pair programming for novice programmers but that is a very different discussion. I would also be against test-driven development as an approach to teaching programming to novices. Are there any other trendy approaches to software development that you want to suggest are relevant to teaching programming?
Trendy, eh? Pair programming is simply two people working together, side by side on a problem. That's been going on since pre-history.
To answer your "why not?" question: Because novice programmers easily get lost in the libraries. Novices often also get confused between the language and the library and it is useful for them to understand the boundary and the distinction.
Yes, perhaps you could point out the boundary between a language like Java and its class library. Because if I were considering someone for a Java position and they didn't know a solid portion of the JCL, I would conclude they didn't know Java.
Learning a new language for an experienced programmer is at most 2 weeks of work.
As defined by you as 'learning the syntax, expression formats and the statements', I presume. Which, I will point out, does not result in learning a language in the sense that the vast majority of people mean. If that were the case, then most C++ programmers would have almost immediately understood and been useful with Java, because (I would estimate) 95% of it was a subset of C++. Except that in reality it takes good programmers much longer, because they need to know the library as well. In addition, they need plenty of time and exposure to pick up on Java conventions, styles, patterns, antipatterns, and technologies (like RMI, JSPs and servlets, JNI, Web Start, jar files, signatures, JDBC, etc, etc, etc). 2 weeks indeed. I would not trust the code of a programmer who thought they were done learning Java after 2 weeks.
Auto-completion and constant compilation have nothing to do with QA nor documentation. Neither auto-completion nor constant compilation produce any kind of documentation. The fact that code compiles does not mean that the code has any quality assurance whatsoever. Compilation of code doesn't even mean that the code will necessarily function let alone conform to the program requirements.
I would assert (get it?) that if code doesn't compile, it's pretty likely it won't meet any kind of software requirements.
As someone who has been on the receiving end of commits (in more than one company) that broke the build and wouldn't have even compiled on the committer's computer, let me express clearly that compilation of code is an important, basic QA step.
I was not talking about producing documentation; I was talking about showing you the reference documentation (and even the method signatures) inline.
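To be concrete about what 'inline' means here, below is a sketch using an invented StringUtil class (not part of the JCL, just an illustration); the point is that hovering over safeTrim, or letting auto-complete offer it, makes an IDE like Eclipse show the signature and the Javadoc right in the editor, with no browser or source dive required:

    // StringUtil.java -- invented example, not a real library class
    public final class StringUtil {

        /**
         * Returns text with leading and trailing whitespace removed,
         * or the empty string if text is null.
         */
        public static String safeTrim(String text) {
            return text == null ? "" : text.trim();
        }

        public static void main(String[] args) {
            // At this call site the IDE pops up the signature plus the
            // Javadoc above, without ever leaving the editor.
            System.out.println("[" + safeTrim("  hello  ") + "]"); // prints [hello]
        }
    }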
Look, 35 years ago a class of programmers were up in arms about the newfangled 'code on the machine' practice that was becoming popular. Programmers needed to write (or punch) it out by hand to properly learn how to program, they felt. They were distrustful of the 'trendy' teletypes and interactive editing that went on to revolutionize software development. But other programmers saw strengths (and weaknesses) in these technological advancements and adapted. And then it happened all over again with stepping debuggers, and IDEs, and the internet versus paper reference manuals, and....
You're going to believe what you're going to believe, and it seems to me that your mind is made up. But for new programmers out there I have a message: there is no need to make programming more difficult than it already is. If you've made the choice (or were required to make the choice) to start with a language like Java rather than a kinder, gentler language like Haskell or Python, then purposefully depriving yourself of inline documentation or debugging clues is, honestly, pretty masochistic. And, one more message: never believe a programmer just because of 'n' years of experience.
<what's funny about dismissing continual compilation is that many (maybe even most) programmers have been doing it for years manually -- code a bit, save, compile, rinse and repeat>