
After seeing so many WWDC videos showing advantages of Swift that don't look like they'll be added to ObjC, I decided to dig a bit deeper into Swift.

I noticed a few goofy things:
1. Named parameters in function calls, including the convention of not naming the first parameter.

2. The inout keyword, which looks like a way to remind people of pass-by-reference vs. pass-by-value.

3. Putting the function return type at the end of the declaration, whereas many popular languages put it at the start.

Although I'm just starting to dig into Swift, it really looks like a move in the wrong direction. I've heard some say it's more in line with a mobile development language, but I fail to see how these things have anything to do with mobile dev.

It looks like an odd solution to something that wasn't really a problem. Instead of leveraging the knowledge people may have from other languages and making things more universal, they've taken a route that offers no clear advantage to the developer.

I haven't gotten into the OO part of the language; maybe they've made better choices there, but at this point Swift doesn't look more natural or better suited for mobile development.

The playground/interactive part looks great, and I hope they've added some better OO features, but so far it doesn't look great, and it seems we may not have a choice much longer. Maybe I just haven't given it enough time yet...
 
Some features of Swift are good, but with others I'm left wondering: why? Such as optionals and unwrapping with ? and !

It seems to add unnecessary complexity, doesn't prevent you from accessing undefined variables, and isn't particularly elegant.

Swift 1 brought us the pyramid of doom.

Also :

if let data = optionalData {
    // code
}

whereas in other languages it would be:

if optionalData != nil {
    // code
}

Having an assignment double as the test for a 'null' value looks odd coming from other languages.
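For what it's worth, the bind-and-test pattern isn't unique to Swift. Here's a minimal Python sketch (the value optional_data is a made-up stand-in, with None playing the role of nil) showing the plain nil-check style next to Python 3.8's assignment expression, which is a close cousin of `if let`:

```python
# Hypothetical optional value: None plays the role of Swift's nil.
optional_data = "payload"

# The plain nil-check style described above:
if optional_data is not None:
    data = optional_data
    upper_plain = data.upper()

# Python 3.8's assignment expression binds and tests in one step,
# much like Swift's `if let data = optionalData`:
if (data := optional_data) is not None:
    upper_walrus = data.upper()

print(upper_plain, upper_walrus)
```

Both branches bind the unwrapped value to a name and skip the block when it is absent, so the "assignment as a test" shape has precedent outside Swift.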
 
These sound more like stylistic complaints than foundational issues, and style choices are pretty arbitrary; they just change. At first I had the same reaction when comparing things like this, but difference isn't inherently negative in the absence of an obviously positive or neutral justification, and difference doesn't have to produce an improvement on every front. I think as you dig deeper into Swift, you'll grow to like it as I did.

My biggest issue with Swift, because I more-than-love the language, is that it has random and bizarre issues when working with the many frameworks that are still written in Objective-C and rely on directly touching the Obj-C runtime to operate correctly. The best example is Core Data, which they managed to hodgepodge into the Swift runtime with strange oddities: non-uniform function calls, inconsistent error handling (error reporting vs. throwing), and strange Obj-C-like syntax such as @NSManaged.

But all in all, I rather like the signatures for calls in Swift ;)

Edit: I'm also going to add that I was familiar with Rust and Haskell before working with Swift (which was heavily influenced by those), so I'm a bit biased.
 
I guess the overall issue is: how does Swift actually improve anything?
Does using the inout keyword to mark a var as passed by reference actually improve anything?
Does putting the function return type at the end, after ->, actually make anything better?
Could these things have been added to ObjC?

As of yet I see little value added; all I see is syntax that's different but adds no functionality to the language.

Some might think it makes the code more readable, but it looks like it was made for those who have never programmed before. Apple really should have focused on the millions of programmers who already know other popular languages.

This is really the same as spoken languages. Inventing a new spoken language at this point, without a huge advantage, isn't going to produce better books. Great books have been written in old languages, just as great programs have been written in old languages.

Moving the return type to the end of the function definition isn't going to help anyone write a better program, and it's non-standard and confusing to those who have learned popular languages.

It's almost like trying to be different for the sake of being different vs understanding the value of standardization.

In business, standardization has great value. Standardization of shipping containers had a huge effect on global shipping. Making things non-standard without offering some advantage has the reverse effect.

Apple has a history of doing this: their PCs were non-standard and never gained huge market share. Developers couldn't run their programs on them without rewriting, businesses had to purchase different software for them, and training was different. They at least had an advantage in some specialized work like desktop publishing. In the end, they went with standardized CPU chips.

Not much different from the connector that Apple uses, not standard, no advantage to the industry.

They really should have focused on using the knowledge base from popular languages.
 
Human programmers make errors, even if they think they don't. Swift's syntax is designed to reduce the likelihood of some typical programming errors (using the wrong type or accessing it the wrong way, etc.). Swift is also designed to make it easier to optimize execution and to do ARC memory management (which helps reduce the likelihood of coding leaks and memory faults). Thus the Swift compiler is likely to become faster and have fewer optimizer bugs than other compilers.

If you are not human, never code any bugs or any code paths that are non-optimal, and no one else ever needs to read and understand your code (including yourself a couple years later), then you don't need Swift.
 
Maybe I'm being overly harsh, but I just expected better. I'm going to continue watching these videos and give Swift a run, because I might need a job using it, and if Apple forces ObjC out I don't want to be left behind. Apple does have a history of dumping older products, so it wouldn't be the 1st time.
 
Objective-C isn't going away anytime soon. It takes a lot of work and time to abolish a computer language, never mind rewriting or replacing the gargantuan collection of frameworks behind today's platforms.

Swift has taken a serious place in the development community. I see it with developers I meet up with and in job postings. If you don't want to stall your Cocoa development career, you'll learn Swift. You don't want to be automatically passed over for a dream job opportunity because you don't know the popular new tool.

Sure, there are some goofy and frustrating things, but focus on and learn the advantages. Those are more interesting and will make the learning experience more enjoyable.
 
Swift is a pile of garbage. It took the worst of C++, Java, and Objective-C and threw it all together for no particular reason.

Apple heard that Objective-C sucked, couldn't figure out why, and decided to make a new language just so people wouldn't complain about Objective-C.

The biggest flaws with the language are:
- Way too much choice. They have Structs, Enums, and Classes, which are all sort of the same thing but with very subtle differences that probably won't matter 99% of the time.
- Syntax is optional. Should you include semicolons? Named parameters? ¯\_(ツ)_/¯
- Incredibly verbose. Take a look at the signatures of UIViewController methods. They spew onto 4 lines, and it's impossible to tell which is which until you get halfway through the signature. The important details get completely buried under unimportant crap.

Seeing how badly Apple flopped with Swift made me feel compelled to write a new language which fixes all of the flaws of every language I've ever used.
 

* Incredibly verbose

You're mixing up the language and the framework. UIViewController is not part of the Swift language but part of the Cocoa Touch framework. The methods are so long because the API is also shared with ObjC, from which the Cocoa and Cocoa Touch APIs took their style: long, verbose method names that can be easily understood; the code documents itself.

The Swift language itself is actually quite compact: less verbose than Java, and certainly less than Objective-C. It's more similar to, say, Python or Ruby.

Personally, I like named parameters; they help create more readable code.

--

Structs, enums, and classes each have their own uses and are very distinct; it's a matter of understanding each. Structs are more lightweight than classes and can be used as a substitute at the right time. Enums: Apple has made these too flexible IMO by allowing methods. Java allows methods on its enums too, but many frown upon this. Enums are very different from structs and classes; if you're using enums to store data or define behaviour, that's a misuse.
 
Personally, I like named parameters - help create more readable code.

That's what you personally like, and that's the problem. You have completely irrelevant choices you can make which are unimportant in the actual execution of your code. It's going to be a choice that each individual will separately make, and it'll make sharing code more difficult because different people will be accustomed to different choices.

I like Python. Every parameter is named (save *args...); it's up to the caller to decide whether to use it positionally or by name (again, let's ignore * and **, which are more subtle special cases...).
 
I don't see much difference from Python with named parameters; I can use my own style there too.

For example, given a method signature:

def some_method(first_param, second_param, third_param):
    ...

some_method(1, 2, 3)
some_method(1, second_param=2, third_param=3)

All of the above are completely valid.

some_method(first_param=1, 2, 3)

The above call is incorrect: Python rejects positional arguments after keyword arguments.

In this regard, Python also is flexible, like Swift.

There are choices in style; that is why the majority of software companies have coding standards, in an attempt to keep code consistent between developers.

If you're developing solo, use your own preferences and be consistent.

Sorry, but I just don't see named parameters as an issue. If you're working in a development team, a good team will be consistent and maintain its coding style; the team lead will enforce this.

 
Readability of a language has at least two parts: someone new to the language vs. someone that's used it for many years.

Consider: English has all the basics and can be used to write a book for children. The same English language can be used for a legal or medical document. It has easy words and complex words.

If you read a lot, you'll notice that some things use a lot of words to try to make a point. In my business communications class, they teach you to start a report with a summary and an exception report. The reason is that a 30-page report is less likely to ever be read than a 2-page exception report that cuts to the chase.

In business, time is money. You write code to get a computer to do something. You write a report to inform a manager of what they need to know so they can decide what action to take. You read a book to understand what the author is trying to say.

There's a balance between catering to children vs. experts. If you write a manager's report for a 1st-grade child, they'll lose focus and have to fight to get to the point, and you've wasted their time.

Compare:

Add(3,10);
vs
Add(firstNumberToAdd: 3, secondNumberToAdd: 10)

Compare:
The red car moves fast.

The color red object car hasAction moves hasRateOfMotion fast.

The way people learn is that they memorize things and move from "have to think about it" to "it comes naturally".

A person that starts driving a stick-shift car has to think about how to operate the clutch and gear shift. After a while it becomes second nature, and they no longer have to actively think about it.

Watch someone that's never driven a stick grind the gears vs. someone that's done it for years.

Some of these "improvements" to make it more readable may have made it better for beginners, but they have made it worse for those who are advanced.

In addition, if you've been around a good while, you'll know that most time is not spent "trying to read"; it's spent trying to figure out "what they did wrong".

Catering a language to beginners vs mature users is short sighted. If the beginners are good, they'll become mature.
 
As far as teams go, many elements of a program should end up as components where the code isn't seen, just used. Maybe in a code module or an API or as a drag-n-drop component such that you don't get into the code unless you need to change or understand something.

Consider the components or objects we use now, something like a button. You just use the button; you tell it what it needs to know: what to say, where to say it, what method to call when tapped. You don't need to know how it draws the corners or how it selects the colors, just that it works.

This is part of the advantage of OO programming. The user calls code using methods to get/set, etc... without knowing how something works under the hood.

If you wrote a "store data to XML file" routine, you wouldn't need to know all the fread/fwrite calls or byte conversions or whatever. If it's written well, you just give it the data and it stores the data in the proper format.

Think of it like sending a package via UPS. You package it, put on the label, give it to them. You don't need to know how many trucks are used or permits are required, you let them do that.

How often do we have to debug the APIs that Apple provides us? Not very often, that's the point. If a programmer is skilled, they'll write routines/objects/components that work and don't need to be debugged very often. So you end up writing at a higher level.

How many advanced books look like this:

the Noun car Verb moves Adverb very fast.

Maybe in grade school, but not in the professional world.
 
This is not unique to OO. You're describing an API, which could be implemented procedurally, in OO style, etc.; depending on the component, its implementation may even be in a different language. 'Your' code doesn't have to be OO either.

<snip api description>
This is part of the advantage of OO programming. The user calls code using methods to get/set, etc... without knowing how something works under the hood.
<snip>

Readability of a language has at least two parts: someone new to the language vs. someone that's used it for many years.

For most languages written in the last few decades, the readability of code is down to the developer. The core language gives the developer the necessary support to write readable code, including Swift.

Don't confuse the Language with frameworks as per your example of UIViewController above.
 
Actually I didn't use the example of UIViewController, someone else did, but I was speaking of a framework/API in general.

The point is the value and effectiveness of "readable" code.

Consider: an advanced math book could be made "readable" to a child or to a math professor. Making it readable to a child doesn't make it unreadable to a math professor; however, the math professor probably doesn't need to be told what the '+' sign does.

Saying:
addNumbers(firstNumberToAdd: 6, secondNumberToAdd: 5)
VS
addNumbers(6,5)

It might help the child to read, but the math professor would see it as taking too long to read.

As I write this message on this forum, I assume people know a verb from a noun. Pointing out every verb and every noun would only be helpful to those that aren't advanced.

Computer programming has been around for a while, just like spoken languages. We don't cater spoken languages to children by making all writing use identifiers like verb and noun; it's assumed they know this. Forcing all writing to use this style doesn't help advanced users. I know what a verb and a noun are; I don't need to be reminded and don't want to spend the time reminding others.
 
I disagree generally. Readable code makes for maintainable code. What seems obvious to you today may not be so in 2 months' time, and what seems obvious to you may not be obvious to other developers.

Descriptive class, method, and variable naming is generally better than short, generic names.

* Ignore the _ in the variable names; I'm not referring to Swift specifically, as this applies to most languages.

i.e.,
x = 5 (what is x, and what does 5 refer to?)
wt = kRed (what is wt? As for kRed: better an enum constant than some magic number.)
tint = kRed (what is the tint for? window, border...?)
wndw_tnt = kRed (don't be lazy by excluding the vowels!)
window_tint = kRed (more descriptive, easily understandable)


I much prefer code that documents itself. Given your example:

addNumbers(firstNumberToAdd: 6, secondNumberToAdd: 5)
VS
addNumbers(6,5)

Comparing the two, for me they take equally long to read; addNumbers is a simple example.

Another example:
1. move_table_column(5, 6)
or
2. move_table_column(fromIndex: 5, toIndex: 6)

Which do you prefer? I certainly prefer (2). Without the named parameters, and without looking further at the code, you wouldn't know what the parameters 5 or 6 actually refer to.

I don't need to go looking at the 'move_table_column' method signature since the named parameters suffice - i.e., self documenting code.
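The same idea can even be enforced, rather than left to style. A small Python sketch (move_table_column and its return value are invented for this post): a bare * in the signature makes the parameters keyword-only, so every call site is forced to read like option (2):

```python
# Hypothetical table API: the bare `*` makes both indices keyword-only,
# so callers must spell out what 5 and 6 mean.
def move_table_column(*, from_index, to_index):
    return ("moved", from_index, to_index)

ok = move_table_column(from_index=5, to_index=6)   # self-documenting call

try:
    move_table_column(5, 6)                        # positional style (1)...
    rejected = False
except TypeError:                                  # ...is rejected outright
    rejected = True
```

So the "choice" complaint cuts both ways: an API author who values named arguments can simply remove the positional option.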

With your maths book example, I really think you're comparing apples and oranges.

On that note: in some languages it is possible to overload symbols such as '+' to change their behaviour, so 1 + 1 may not equal 2; the + operator may do something entirely different!
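As a toy illustration (the Weird class is invented here), Python lets a class redefine '+' via __add__, so '+' between two values of that type can stop meaning addition entirely:

```python
class Weird:
    """Toy type whose '+' deliberately multiplies instead of adds."""
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        # Surprising on purpose: '+' performs multiplication.
        return Weird(self.value * other.value)

one = Weird(1)
# For this type, "1 + 1" yields 1, because __add__ was redefined:
result = (one + one).value
```

This is exactly why readable, self-documenting code matters: the symbol alone doesn't always tell you what the operation does.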

I personally like code that is easy to read and understand. I don't mind that such code may be a bit more verbose or take a little more time to read. A competent developer won't actually write code that looks "school-grade", as you describe above; it's inexperienced developers who often write code that is difficult to understand, with inappropriate method, variable, and class naming.

Code is not like a book that may be aimed at a particular skill level; code should be understandable to everyone. Development teams often contain a range of people, such as graduates, intermediates, and senior devs. Additionally, people are moved on and off a project, so quick ramp-up is important, and well-written code aids that.

 
I've never understood why people hate verbose frameworks. ArtOfWarfare gave a good example of verbosity being bad, but it is usually a good thing. I've seen some brilliant programmers absolutely ruin great projects by trying to be too succinct. For example, the class NSManagedObjectContext is good because it is self explanatory. Some programmers who complain about verbosity might have called it NSManObjCon, which isn't explanatory at all and is quite confusing.

If your code is so cryptic that no one can read it, all in an attempt to make your code shorter, you aren't doing anyone a favor.
 
Old-fashioned BASIC supported only very concise syntax: 1 or 2 characters for variable and function names, and only line numbers for subroutine targets. I used to write code like that. Lots. Very concise.

Then I had to read and understand my own code several years later. That made me appreciate verbosity *much* more.
 
Honestly, it sounds like you haven't done much with Swift.

I didn't like a lot of the changes before I actually started using the language. As an example, getting rid of the ; at the end of lines sounded stupid to me. Then, after a month of writing Swift code, whenever I switched over to C# I would wonder why the hell I needed to type ; after every line.

It is hard when you are used to working in one language. Try to just embrace it and actually use it for a month or two and then judge it.
 
I think that's part of the problem, I'm coming from many other languages and have been programming for decades. Being forced to write something that looks like it belongs in a child's book and clutters up the code doesn't impress me.

I'm not saying that ObjC didn't anger me either, it's an odd language.

From a business perspective, I think Swift really missed the mark. In the end, it all comes down to the business of making software, either as an independent or as an employee, it either fits the business model or it doesn't.

They didn't capitalize on the massive existing knowledge base, and the ease of use just isn't there. Maybe the goal was to bring in a new crop of entry-level programmers and make it as easy to follow as they can.

Even worse, it's looking like they're leaning toward giving advantages like playgrounds to Swift and leaving ObjC behind.
 
What languages have you used in the past?
Pascal, Visual Objects, Delphi, SLAM, Cobol, C, C++, C#, Clipper (xBase), VB, SQL, Java, and now ObjC.

I don't mind learning languages, but I've spent decades in the business of providing software solutions, both as a freelancer and as an employee, and I always look long term. I try very hard to write code that never needs to be modified later. I focus on bulletproof routines that someone else doesn't need to maintain.

One example was an import routine for American Express at a 3rd-party logistics provider. Someone came in and modified my code and broke the whole thing because he thought he knew better. He wanted to prove he was smart, but he didn't understand what the code was doing because he wasn't a skilled systems analyst. He was a reasonable programmer, but he had no clue about the design of the system; he tried to improve it, instead crashed it, and we nearly lost the account.

I later worked for a company that provided 3rd-party logistics for Visa. Someone came in and modified some code that someone else wrote and crashed the system. They ended up bankrupt and were sued over these mistakes.

Maybe this is why I'm against catering to people who need parameter markers like "inout".

If someone doesn't understand by ref vs by val parameters, they shouldn't be programming.

I see it like mandating training wheels on all bikes. Training wheels can be used to train, but if you can never remove the training wheels, it'll never be a bike; it'll always be a 4-wheeled pedal thing.

We grow when we are challenged.
 
Thanks for replying and listing the languages you've used.

I try very hard to write code that never needs to be modified later. I focus on bulletproof routines that someone else doesn't need to maintain.

This particular quote stuck out. Code will require changing at some point; the needs and requirements of the business change, so the software will need updating to reflect them. Not to mention, no developer is perfect; there will always be bugs.

Code has to be written in a way that it *can* be modified in the future.

Something that I'd expect any professional developer to know.

As for your inout example: I think it's reasonable to mark a particular parameter inout when needed. This isn't unique to Swift; PL/SQL has a similar mechanism with its IN OUT parameter mode.

inout tells the developer how to use the method (as well as helping the Swift compiler). Without looking at the method implementation (which may not be available or well documented), it becomes clear that the particular parameter is going to be updated by reference. Without knowing this, a developer could well introduce bugs. Fortunately, because inout is used, the Swift compiler will flag it as a compilation error if the argument isn't explicitly passed by reference (i.e., someMethod(&variable)). Development time is saved by preventing potential bugs.

It isn't about training wheels. It's code that documents itself and helps reduce bugs (at compile time, in this case). This helps developers of all abilities write reliable code the first time round.
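For contrast, here's a small Python sketch of the hazard inout guards against (the normalise function is made up for this example). In Python, nothing at the call site reveals that an argument will be mutated, whereas Swift would force the signature to say inout and the caller to write &:

```python
# Python has no inout marker; mutation can be invisible at the call site.
def normalise(values):
    values.sort()          # mutates the caller's list in place

data = [3, 1, 2]
normalise(data)            # nothing here signals that `data` changes
# In Swift the equivalent would be `func normalise(_ values: inout [Int])`,
# and the call site would have to read `normalise(&data)`.
```

After the call, data is silently reordered; the & requirement makes exactly this side effect visible to whoever reads the calling code.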

Developers are challenged enough by the process of building software alone (often the enjoyable part); they shouldn't also have to be challenged by the deficiencies of a language, which often is not enjoyable.

Software development is by nature complex, and I certainly welcome anything that helps build reliable software, whether that's features of a language, IDEs, etc.
 
From a business perspective, I think Swift really missed the mark. In the end, it all comes down to the business of making software, either as an independent or as an employee, it either fits the business model or it doesn't.

Perhaps the business perspective was to enable development of more reliable software. That was the big sell when they released Swift; early adopters I know were sold on it, so I'd say it hit the mark.
 