Not really relevant here, but it may be worth remembering for the future.
When manipulating the characters of a String in Java, if the String is longer than about 300 characters, getting an array with toCharArray and iterating over that array is faster than calling charAt; for shorter Strings, charAt is slightly faster than toCharArray. (That's been the case since JDK 1.4; with earlier versions the threshold was about 15 characters.)
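To make the comparison concrete, here is a minimal sketch of the two styles as a letter-counting loop (the method names are just for illustration; the crossover point varies by JVM, so measure on your own workload):

public class CharIterationDemo {
    // Style 1: index into the String directly with charAt.
    static int countLettersCharAt(String s) {
        int count = 0;
        for (int i = 0; i < s.length(); i++) {
            if (Character.isLetter(s.charAt(i))) {
                count++;
            }
        }
        return count;
    }

    // Style 2: copy the contents once with toCharArray, then iterate the array.
    static int countLettersCharArray(String s) {
        int count = 0;
        char[] chars = s.toCharArray();
        for (int i = 0; i < chars.length; i++) {
            if (Character.isLetter(chars[i])) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        String sample = "Hello, world! 1234";
        System.out.println(countLettersCharAt(sample));    // 10
        System.out.println(countLettersCharArray(sample)); // 10
    }
}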
This whole discussion reminded me of a change in Java 1.5. Apparently the new Unicode 4 standard adds more characters than fit in 16 bits, so Java 1.5 now supports supplementary characters, storing them in Strings as UTF-16 surrogate pairs. So basically, if you want to be completely correct, you can no longer assume that a char primitive is sufficient to hold a character. In retrospect, everyone else using UTF-8 now sounds pretty smart. The point is that the optimisation of getting the source char[] is no longer a good idea. If someone had used the simple String.charAt(int) and Character.isLetter(char) approach, they could simply replace those with String.codePointAt(int) and Character.isLetter(int). Well, to be honest, they'd also have to change the looping to step past surrogate pairs, e.g. by checking Character.isSupplementaryCodePoint(int). Of course, you could just use the Character methods on the char[], but I think that makes the porting less straightforward.
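Here is a hedged sketch of that port, again as a letter-counting loop (the countLetters name is just for illustration). The step size comes from Character.isSupplementaryCodePoint as described above; Character.charCount(int) would do the same job:

public class CodePointDemo {
    // Java 5+: iterate by code point rather than by char, so characters
    // outside the Basic Multilingual Plane are handled correctly.
    static int countLetters(String s) {
        int count = 0;
        int i = 0;
        while (i < s.length()) {
            int cp = s.codePointAt(i);        // full code point, not just one char
            if (Character.isLetter(cp)) {     // the int overload handles values > 0xFFFF
                count++;
            }
            // Supplementary code points occupy two chars (a surrogate pair).
            i += Character.isSupplementaryCodePoint(cp) ? 2 : 1;
        }
        return count;
    }

    public static void main(String[] args) {
        // U+1D400 MATHEMATICAL BOLD CAPITAL A is a letter outside the BMP.
        String s = "A" + new String(Character.toChars(0x1D400));
        System.out.println(countLetters(s)); // 2
    }
}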