
Thursday, February 07, 2013

Is Enterprise Software an excuse for sloppy code?

By Greg Myers

If you follow the chatter on Twitter these days, there’s a newer hashtag flying around that is gaining in popularity, #EnSW, short for Enterprise Software. It helps people find others discussing enterprise software and, hopefully, brings like minds together.

I’ve been thinking about enterprise software a lot these days. Mainly because it’s my job to help customers make sure they get everything they paid for with an investment in an enterprise software suite like SAP BusinessObjects Business Intelligence 4.0. I recommend how big the systems should be, help them build it, migrate their old content into it, and ultimately support and maintain it once that system becomes productive. BOBJ isn’t the only enterprise software I’ve maintained over the years. I’ve also worked with Microsoft SQL Server, Hyperion Essbase (back when Hyperion still owned it), Oracle databases to some extent, SAS, and various smaller non-SAP HRIS systems, to name a few. And while most of these tools have something to do with data and business analytics, they have something else in common, too. They’re all a bugger to work with from the implementation and administration side. So as this current conversation on Twitter continues to evolve and unfold, I started to wonder what enterprise software means to me.

I’ve been told several times over the years, when talking with various peers or even vendor support staff, that enterprise software is meant to be hard. That somehow, a massive implementation and support effort is etched into the phrase “Enterprise Software.” I had a manager many years ago tell me jokingly, “If this stuff was easy, we wouldn’t need you, Greg.”

But does Enterprise Software really need to be so complicated? Sure, it needs to be complex in how it operates and robust in the features it provides. That’s really why we buy it, right? We need these large suites to run our companies. But does the fact that the feature set is large and the internal operations are complex make a valid excuse for sloppy integration, buggy code, and frustrating documentation? I know a lot of people think it does.

But I disagree.

If I had a dollar for every time I was told that a less-than-intuitive feature or workflow inside of an enterprise software application was okay, because it was documented that way, I’d be retired by now. Does documenting a kluge justify its existence in the first place?

Think for a moment what would happen if we applied Steve Jobs’ design principles to enterprise software. Imagine if the code and the integration of all the various components that make up an enterprise software suite were sleek and clean in the user interface, and equally beautiful in how they worked under the hood. Imagine a suite with a unified theme, where a feature or component was consistently named throughout the system. A place where error messages, if they ever did happen, made sense in plain language. Imagine a fast, clean installation process.

Part of the reason we have Enterprise Software the way it is is political. And I’d even go so far as to say the political factors are really two distinct problems.

First, companies that produce Enterprise Software tend to be large, and by the nature of being large, there exist ‘silos of power’. The software suite is too large to be developed by any one team, and the different teams tend to report to different managers, and even be in different geographic locations. Therefore, software is developed in independent silos with little or no communication between the silos until it is time for integration. By the time a project reaches integration, it is too late for major changes or even code reviews. It’s time to jam it together and make it work no matter how bad it looks. Put some lipstick on it and shove it out the door.

Second, companies that produce Enterprise Software are extremely reluctant to innovate on any type of grand scale. While they may put out something totally new, they continue to have to support years (sometimes decades) of legacy code to continue to pull along. There is often a huge reluctance to throw the old into the recycle bin and start over from scratch using newer, more viable technologies. Think about how bloated Microsoft Windows was over the years, until they finally seemed to ‘get it’ with Windows 7.

Another part of the reason we have Enterprise Software the way it is is our own fault. We buy it. Making a purchase of such code, in essence, condones it in the state it is in, which often is less than perfect. Tough stance, I understand. We often very much need the software we buy to run our business, and the choices aren’t always there to take a stand like this. But the fact remains. We buy it, so they keep making it.

I certainly realize that an enterprise software suite is not an iPad application. But what if it followed the same principles as one?

* Easy to download and install
* Simple, clean, beautiful user experience
* Consistent
* Easy to maintain
Doesn’t that sound like an enterprise software suite you’d like to use? It sure does to me.

The enterprise software market is in for some changes in the years ahead. As people who grew up using computers, and more recently smart phones and tablets, enter the workforce and begin to take positions where they have purchase authority, the ‘rules’ about what is acceptable software are going to change. Just think about how the way we work has changed in the last 30 years. 30 years ago, in a typical corporate office setting, there were almost no computers at all. Maybe a few very large ones in a “Computer Room”. 30 years ago people laughed at Bill Gates when he said he thought there would be a computer in every home in America.

In order to compete, or even survive in the future, enterprise software vendors are going to have to be mindful of the changing quality and experience requirements of the upcoming consumer generation. (Some of us have these expectations already.) The status quo isn’t going to cut it.


Ref: http://evtechnologies.com/is-enterprise-software-an-excuse-for-sloppy-code/

Wednesday, December 05, 2012

C++ in Coders at Work

C++ fascinates me—it’s obviously a hugely successful language: most “serious” desktop apps are still written in C++ despite the recent inroads made by Objective C on OS X and perhaps some C# on Windows; the core of Google’s search engine is written in C++; and C++ dominates the games industry. Yet C++ is also frequently reviled both by those who never use it and by those who use it all the time.

That was certainly reflected in the responses I got from my Coders interviewees when I asked them about it. Jamie Zawinski, as I’ve discussed recently, fought tooth and nail to keep C++ out of the Netscape code base (and eventually lost). Some of that was due to the immaturity of C++ compilers and libraries at the time, circa 1994, but it seems also to have to do with his estimation of the language as a language:
"C++ is just an abomination. Everything is wrong with it in every way. So I really tried to avoid using that as much as I could and do everything in C at Netscape."

Part of Zawinski’s issue with C++ is that it is simply too complex:
"When you’re programming C++ no one can ever agree on which ten percent of the language is safe to use. There’s going to be one guy who decides, “I have to use templates.” And then you discover that there are no two compilers that implement templates the same way."

Note that Zawinski had started his career as a Lisp programmer but also used C for many years while working on Netscape. And he later enjoyed working in Java. So it’s not that C++ was either too high-level or too low-level for him or that he couldn’t wrap his head around object orientation.
Joshua Bloch, who also hacked low level C code for many years before becoming a big-time Java head, told me that he didn’t get into object-oriented programming until quite late: “Java was the first object-oriented language I used with any seriousness, in part because I couldn’t exactly bring myself to use C++.” He echoed Zawinski’s point about how C++ forces programmers to subset the language:
"I think C++ was pushed well beyond its complexity threshold and yet there are a lot of people programming it. But what you do is you force people to subset it. So almost every shop that I know of that uses C++ says, “Yes, we’re using C++ but we’re not doing multiple-implementation inheritance and we’re not using operator overloading.” There are just a bunch of features that you’re not going to use because the complexity of the resulting code is too high. And I don’t think it’s good when you have to start doing that. You lose this programmer portability where everyone can read everyone else’s code, which I think is such a good thing."

Ken Thompson, who still mostly uses C despite working at Google which is largely a C++ shop, has had as long an exposure to C++ as just about anyone, having worked with Bjarne Stroustrup, C++’s inventor, at Bell Labs:
"I would try out the language as it was being developed and make comments on it. It was part of the work atmosphere there. And you’d write something and then the next day it wouldn’t work because the language changed. It was very unstable for a very long period of time. At some point I said, no, no more. In an interview I said exactly that, that I didn’t use it just because it wouldn’t stay still for two days in a row. When Stroustrup read the interview he came screaming into my room about how I was undermining him and what I said mattered and I said it was a bad language. I never said it was a bad language. On and on and on. Since then I kind of avoid that kind of stuff."

At that point in the interview I almost changed the topic. Luckily I took one more try at asking for his actual opinion of C++. His reply:
"It certainly has its good points. But by and large I think it’s a bad language. It does a lot of things half well and it’s just a garbage heap of ideas that are mutually exclusive. Everybody I know, whether it’s personal or corporate, selects a subset and these subsets are different. So it’s not a good language to transport an algorithm—to say, “I wrote it; here, take it.” It’s way too big, way too complex. And it’s obviously built by a committee. Stroustrup campaigned for years and years and years, way beyond any sort of technical contributions he made to the language, to get it adopted and used. And he sort of ran all the standards committees with a whip and a chair. And he said “no” to no one. He put every feature in that language that ever existed. It wasn’t cleanly designed—it was just the union of everything that came along. And I think it suffered drastically from that."

Brendan Eich, the CTO of the Mozilla Corporation, whose Mozilla browser is written almost entirely in C++, talks about “toe loss due to C and C++’s foot guns” and when I asked him if there are any parts of programming that he doesn’t enjoy as much as he used to, he replied:
"I don’t know. C++. We’re able to use most of its features—there are too many of them. It’s probably got a better type system than Java. But we’re still screwing around with ’70s debuggers and linkers, and it’s stupid. I don’t know why we put up with it."

At least among my interviewees, even the most positive comments about C++ tended to fall in the category of “damning with faint praise”. I asked Brad Fitzpatrick, who used C++ in college and again now that he’s at Google, whether he likes it:
"I don’t mind it. The syntax is terrible and totally inconsistent and the error messages, at least from GCC, are ridiculous. You can get 40 pages of error spew because you forgot some semicolon. But—like anything else—you quickly memorize all the patterns. You don’t even read the words; you just see the structure and think, “Oh, yeah, I probably forgot to close the namespace in a header file.” I think the new C++ spec, even though it adds so much complexity, has a lot of stuff that’ll make it less painful to type—as far as number of keystrokes. The auto variables and the for loops. It’s more like Python style. And the lambdas. It’s enough that I could delude myself into thinking I’m writing in Python, even though it’s C++."

Dan Ingalls, who helped invent modern object oriented programming as part of Alan Kay’s team that developed Smalltalk, never found C++ compelling enough to use but isn’t totally averse to using it:
"I didn’t get that much into it. It seemed like a step forward in various ways from C, but it seemed to be not yet what the promise was, which we were already experiencing. If I had been forced to do another bottom-up implementation, instead of using machine code I would’ve maybe started with C++. And I know a couple of people who are masters of C++ and I love to see how they do things because I think they don’t rely on it for the stuff that it’s not really that good at but totally use it as almost a metaprogramming language."

Joe Armstrong, similarly, has never felt the need to learn C++:
"No, C++, I can hardly read or write it. I don’t like C++; it doesn’t feel right. It’s just complicated. I like small simple languages. It didn’t feel small and simple."

And finally Guy Steele, who probably knows more about more languages than anyone I interviewed (or possibly anyone, period), has also not been drawn to C++. But he did go out of his way to try to say something nice about Stroustrup’s effort:
"I have not been attracted to C++. I have written some C++ code. Anything I think I might want to write in C++ now could be done about as well and more easily in Java. Unless efficiency were the primary concern. But I don’t want to be seen as a detractor of Bjarne Stroustrup’s effort. He set himself up a particular goal, which was to make an object-oriented language that would be fully backwards-compatible with C. That was a difficult task to set himself. And given that constraint, I think he came up with an admirable design and it has held up well. But given the kinds of goals that I have in programming, I think the decision to be backwards-compatible with C is a fatal flaw. It’s just a set of difficulties that can’t be overcome."

Obviously with only fifteen interviewees in my book I have only a sampling of possible opinions. There are great programmers who have done great work with C++ and presumably at least some of them would have had more enthusiastic things to say about it if I had spoken with them. But this is what I heard from the people I spoke with.


Ref: http://gigamonkeys.wordpress.com/2009/10/16/coders-c-plus-plus/

Wednesday, September 05, 2012

50 Peaceful Things

Source: http://tinybuddha.com/blog/50-peaceful-things/

“Peace is not something you wish for. It’s something you make, something you do, something you are, and something you give away.” ~Robert Fulghum

Here are 50 peaceful things to help you be mindful and happy throughout the day:
1. Lying in bed for a few minutes in the morning before hopping into your day. There’s no reason to rush.
2. Eating breakfast slowly, at a table, instead of grabbing something on the go.
3. Listening to your favorite music on the way to work, and remembering when you first heard it. Where you were, who you were with, how you felt.
4. Hugging someone you know long enough to make it meaningful.
5. Appreciating something you take for granted, like your feet for taking you where you need to go.
6. Focusing solely on the smell of your coffee as it brews.
7. Noticing something thoughtful a stranger does for someone else. (There are a lot of beautiful people out there).
8. Watching a coworker get proud about doing something well and feeling happy for them. Nothing’s more calming than focusing on someone else and forgetting yourself for a while.
9. Getting into the zone typing, like finger-moving meditation, maybe set to the rhythm of a great tune on your iPod.
10. Doing only one thing, even though you have a lot to do, to fully enjoy what you’re doing.
11. Knowing you did a good job and taking a few minutes to bask in self satisfaction. You’re pretty awesome.
12. Expressing how you feel and then letting it be without feeling pressure to explain (pressure we usually put on ourselves).
13. Taking a break without anything to do besides breathing and noticing little details in your environment. How soft the rug is after having been cleaned. How sunlight from your window leaves shadows on your desk.
14. Holding someone’s hand in both of yours when you thank them.
15. Listening to someone talk–really hearing them–without thinking about what you’ll say next.
16. Remembering a time when you felt peaceful, and going back there in your head.
17. Writing a thoughtful, hand-written note to someone, even if you could email, because you feel more connected when you write it out.
18. Channeling your inner Kevin Rose and savoring a cup of loose leaf tea.
19. Forgiving someone, not just in words, but by feeling compassion for them.
20. Writing down thoughts that keep racing through your head, crumpling up the paper, and throwing it away. Being done with them.
21. Letting yourself have lunch without any thoughts of work.
22. Doing something slowly and finding it more fun than you realized when you rushed through it.
23. Holding a smooth rock in your palm and feeling stable and grounded.
24. Believing someone else when they say everything will be OK.
25. Feeling whatever you feel without judging it, knowing it will pass. It always does.
26. Making a short video of your child or niece, and watching it in the middle of the day when the world seems to be moving too fast.
27. Watching something in nature and letting yourself be intrigued. Feeling wonder at something simple that man hasn’t touched or changed.
28. Finding something beautiful in chaos, like the love between your loud family members at the dinner table, or one raindrop dripping down your window as you navigate a traffic-congested road.
29. Thinking something and realizing you can change your thoughts whenever you want. You don’t have to dwell in a painful memory–you can make a better one right now.
30. Telling someone you love them, not because you want to hear it back, but because you feel it too deeply not to express it. Because expressing it makes you happy.
31. Realizing there’s nothing to worry about. You can be happy right now–you have everything you need to smile.
32. Doing something creative and childlike, like making someone a card or coloring. Even as an adult, it feels good to pick all the right colors and stay mostly in the lines. Or go out of the lines and embrace it. It’s your picture!
33. Giving someone you love the benefit of the doubt to put your mind at ease and maintain a peaceful relationship.
34. Rolling down the window when you drive and feeling the pressure of the cool air on your face.
35. Calling one of your parents in the middle of the day to thank them for everything they’ve done–everything they’ve given you that one crazy afternoon can’t diminish or take away.
36. Taking a walk with no destination in mind, just to see what’s out there to be seen.
37. Letting go of something you’ve been holding onto that does nothing but stress you out.
38. Telling someone why knowing them makes you lucky.
39. Letting someone have their opinion; knowing you can honor it without changing or compromising yours.
40. Setting out on a joy mission–looking for something to do solely to experience fully present, open-to-possibilities bliss.
41. Defining peaceful for yourself. If peace is yelling, “I’m the king of the world!” while jogging around a track, do it with abandon.
42. Listening to a song that gives you goosebumps and creating a mental montage of moments that made you happy.
43. Turning off all your electronics to read without distractions.
44. Doing something by candlelight and remembering a simpler time.
45. Closing your eyes and dancing to a song you can feel pulsating in your veins.
46. Turning off your cell phone, no matter who might call or text, because there’s something you’d like to do with all your heart and attention.
47. Sitting in a sauna, and letting the heat melt all your stresses away.
48. Finally making time for something you want to do but always say you don’t have time for.
49. Making eye contact with a stranger and feeling connected to a world larger than your own.
50. Letting yourself lie in bed at night without making a mental inventory of things that went wrong today or could go wrong tomorrow.
And one last peaceful thing: being grateful for new friends with awesome ideas, and letting them inspire you.

Tuesday, September 04, 2012

Stages of Organizational Development

Source: http://www.centerod.com/2012/02/3-stages-organizational-development/

By understanding a simple model of three stages of organizational growth, organizations can design themselves to move beyond chaos to high performance. Most organizations experience chaos. In fact, a complete absence of chaos would mean that an organization could not respond to changing demands, a sure prescription for stagnation and death. Nevertheless, chaos that immobilizes an organization and results in its inability to respond effectively to the demands of the environment is unproductive and should be minimized if an organization is to succeed. This article presents a simple model that describes three stages of organizational growth and development—from chaos to stability to high performance. It also outlines some of the initiatives which leaders can take to move beyond chaos and eventually to high performance.

Stage III: High Performance (Outstanding, sustainable results)
• Clear statement of mission that creates a sense of esprit de corps
• Well-defined values which result in a distinctive culture
• Respect for people that is a deeply ingrained part of culture
• Good communication and information sharing systems
• High involvement and empowerment of people
• Design (work flow, structure, systems) that supports mission and values

Stage II: Stability (Back to the Basics)
• Clarity of goals and direction
• Consistency in priorities
• Well-defined policies and procedures (technical and personnel)
• Agreement on roles and responsibilities
• Basic management processes rewarded and practiced (goal-setting, performance reviews, etc.)

Stage I: Chaos (Fire-Fighting Mentality)
• Crisis/short-term focus
• Lack of clear direction and goals
• Shifting priorities
• Unclear policies and procedures
• “Us” vs. “them” attitude
• Blame and lack of ownership
• Alienated work force


CHAOS

The chaotic organization operates on the fringes of being out of control. It is problem-oriented. People are reactive and manage by attending to the pressure of the moment. Expectations, policies, standards, etc., are unclear, not agreed upon or poorly enforced. Good ideas and intentions abound, but there is not enough unity, commitment or follow-through to carry them out. Work is unpleasant for most individuals. People act out of self-protection by blaming and criticizing others, and hence, set up a climate that perpetuates fear, suspicion, hostility, and frustration. The problems of the chaotic organization are the lack of routine, lack of clarity, and hence, anxiety about what to expect from moment to moment. Needed are more formalized structures, routines, accountability, and clarification of policies, expectations, and roles.

STABILITY

The stable organization is characterized by predictability and control. Structure, routine, policies, etc., have been established to remove uncertainty from the environment. Goals are clear and people understand who is responsible for what. The major focus of the organization is to ensure an efficient daily operation. People within this climate tend to be dutiful and expect fairness. Conformity is the watchword, and people are rewarded for compliance rather than risk-taking and innovation. The purpose of the organization is subservient to its efficiency. The limitation of an organization that fails to grow beyond stability is that efficiency is more important than innovation and development. Doing things by the book and following the procedures becomes more important than the purpose and mission of the organization. Such companies are eventually left behind as customers find more responsive competitors. Needed are a long-term vision, emphasis on growth and development, and a culture in which people exercise greater autonomy in making decisions and solving problems.

HIGH PERFORMANCE

The essence of high performance is shared ownership. Employees are partners in the business and assume responsibility for its success. These organizations are highly participative and collaborative. Their members have extensive decision-making and problem-solving responsibilities. Line of sight is on serving the customer rather than the formal organizational structure. The mission of the organization, rather than rules and policies, guides day-to-day decision-making. Such an organization is founded on a unique and strong culture derived from a clear set of values expressed and reinforced by its leaders. Those values provide focus on what is important while allowing flexibility and innovation. The processes, systems and structure of the organization are designed to be in alignment or harmony with the values of the organization. The high performance organization adopts a long-term point of view. The development of people is seen as a primary management task. Trust and cooperation exist among organization members. People don’t blame or attack others because doing so is not in their own best interest.
An important learning from this model is that an organization cannot become high performing without a foundation of stability. Ironically, high performance requires not only participation, flexibility, and innovation, but order, predictability and control. The leaders of many an organization have attempted to grow from chaos to high performance without the underlying foundation of stability and consequently failed or been frustrated in their efforts. Leaders who want to create high performance work systems must be certain that they implement processes that ensure stability as well.

INITIATIVES TO CREATE A CLIMATE OF STABILITY

Creating stability has to do with getting back to the basics of good, sound management practices. Consider that the first step a good sports coach will take when his team is floundering is reinforcing the fundamentals: blocking and tackling; motion and passing. Likewise, senior managers within a chaotic, floundering organization need to get back to the fundamentals of good management by creating structure and order. There are two paths to structure and order. One, harmful in the long-run and contrary to a high performance philosophy, is “control” (directing and telling) which represents a short-term, knee jerk response to symptoms rather than root causes. The second and more productive path to stability is “clarity”; clarity of direction, goals and priorities; clarity of roles, responsibilities and performance expectations; clarity and documentation of processes and procedures. Clarity communicates the boundaries within which people do their work and make decisions. It doesn’t rob them of their responsibility but establishes the rules of success. The consequence is structure and order that form the foundation of a strong organization.

INITIATIVES TO CREATE A CLIMATE OF HIGH PERFORMANCE

Although there are many aspects of high performance, it begins by defining an inspiring ideology which consists of the deepest beliefs and values of the leaders of the organization. An ideology, thoughtfully developed and implemented, establishes the attitudes and habits of people throughout the organization and forms the boundaries within which people make decisions and conduct themselves in their relationships with others.
An ideology must be translated into a way of life reinforced by the entire infrastructure of the organization. Core business processes, policies and procedures, layout and use of facilities, reporting relationships, information-sharing, planning, recruiting and selection, training, compensation, and so on, must be aligned with the ideology and strategy of the business. Such alignment results in dramatic improvements in quality, cycle time, productivity and employee commitment.
Another aspect of a high performance organization is that people are deeply valued. Decision-making and problem-solving are pushed to as low a level as possible. Problems are solved when and where they occur. Jobs are enriched so people have the authority, training and support to do whole and complete tasks. Such empowerment, however, does not happen by decree. It is a process which must be charted by an organization's leaders. This includes specifying the boundaries within which teams of people will work, identifying the tasks and responsibilities for which people should be accountable, designating leadership roles within teams, developing a time-line for taking on new roles, and providing the information, training, and resources needed for people to be successful. As this transfer of responsibility occurs, the motivation of organization members changes from mere compliance to commitment and a genuine desire to contribute.

SUMMARY

There is no magic in moving beyond chaos. There are no simple formulas. Real organizational development requires commitment and hard work. However, for those who want to eliminate waste, improve quality, and provide better customer service, there are powerful initiatives that can lead to a foundation of organizational stability and eventually high performance.

Wednesday, April 25, 2012

Objective C and C++

C is the basis of all programming languages. It's very close to assembly language. That's why you see a lot of pointer manipulation inside >:).

C++ (created by Bjarne Stroustrup) and Objective C (created by Brad Cox and Tom Love, and later adopted by NeXT) are the result of people trying to manage C's complexity on large projects. One thing we can do to manage the complexity of a large project is to break it down into several independent objects. Just like when you are constructing a bridge, you want to break the bridge down into several independent pieces like the deck, bearings, drainage, joints, etc. Each object should be as independent of the others as possible, just as you wouldn't want the joint system intermixed with the drainage system, would you? That's the idea of an "Object" in computer programming.

From this: The main difference between the two is in typing: static vs. dynamic. It gets rather philosophical and I find it fascinating in that sense. Static typing (C++) assumes that the world can be categorized (abstracted) perfectly. In other words, categorization is assumed to be inherent in nature. If it fails, it means you made a mistake in understanding the underlying structure of the universe. (This is analogous to Structuralism in modern philosophy, like Noam Chomsky.) Dynamic typing assumes that categorization is never perfect because it is an order that we humans impose on nature. As such, the flaws are unavoidable. By leaving the typing dynamic (by leaving the definitions of objects as dynamic as possible until run time), Objective-C is able to accommodate situations that do not fit neatly into predefined categories. These situations come up often in real life.

Here are the links on how to program in Objective-C and C++ if you are already familiar with C:

Read these articles too to keep things in perspective:

Tuesday, April 03, 2012

Brian's Ten Rules for Writing Cross Platform 'C' Code

Source: here

Introduction:

I've had a lot of success in my 20-year software engineering career developing cross platform 'C' and 'C++' code. Most recently, at Backblaze we develop an online backup product where a small desktop component (running on either Windows or Macintosh) encrypts and then transmits users' files across the internet to our datacenters (running Linux) in San Francisco, California. We use the same 'C' and 'C++' libraries on Windows, Mac, and Linux interchangeably. I estimate it slows down software development by about 5 percent overall to support all three platforms. However, I run into other developers or software managers who mistakenly think cross platform code is difficult, or might double or triple the development schedules. This misconception is based on their bad experiences with badly run porting efforts. So this article quickly outlines the 10 simple rules I live by to achieve efficient cross platform code development.


The Target Platforms: 1) Microsoft Windows, 2) Apple Macintosh, and 3) Linux.

The concepts listed here apply to all platforms, but the three most popular platforms on earth right now are Microsoft Windows ("Windows" for short), Apple Macintosh ("Mac" for short), and Linux. At Backblaze, we deliver the user-installed desktop component of our system to desktops running Windows and Mac, and our datacenter all runs Linux. We use the same 'C' and 'C++' libraries on all three platforms interchangeably by following the 10 simple rules below.

One thing I'd like to make clear is that I always believe in using the most popular (most highly supported) compiler and environment on each platform, so on Windows that's Microsoft Visual Studio, on Apple Macintosh that is Xcode, and on Linux it is GCC. It wouldn't be worth it to write cross platform code if you had to use a non-standard tool on a particular platform. Luckily you can always use the standard tools and it works flawlessly.


Why Take the Extra Time and Effort to Implement Cross-Platform?

Money! :-) At Backblaze we run Linux in our datacenter because it's free (among other reasons), and every penny we save in datacenter costs is a penny earned to Backblaze. At the same time, 90+ percent of the world's desktops run Windows, and to sell into that market we need to offer them a Windows product. Finally, virtually all of the remaining desktops run Apple Macintosh, and an increase of 10 percent of revenues to Backblaze is the difference between a "slightly profitable business" and a "massively profitable business".

Another reason to implement cross-platform is that it raises the overall quality of the code. The compilers on each platform differ slightly, and can provide excellent warnings and hints on a section of code that "compiled without error" on another platform but would crash in some cases. The debugger and run-times also differ on each platform, so sometimes a problem that is stumping the programmer in Microsoft Visual Studio on Windows will show its root cause easily and quickly in Xcode on the Macintosh, or vice versa. You also get the benefit of the tools available on all of the target platforms, so if gprof helps the programmer debug a performance issue then it is a quick compiler flag away on Linux, even though it is not readily available on Windows or Xcode.

If the above paragraph makes it sound like you have to become massively proficient in all development environments, let me just say we can usually teach a Windows centric programmer to navigate and build on the Mac or Linux in less than an hour (or vice versa for a Macintosh centric programmer). There isn't any horrendous learning curve here, programmers just check out the source tree on the other platforms and select the "Build All" menu item, or type "make" at the top level. Most of the build problems that occur are immediately obvious and fixed in seconds, like an undefined symbol that is CLEARLY platform specific and just needs a quick source code tweak to work on all platforms.


So onto the 10 rules that make cross platform development this straight-forward:

Rule #1: Simultaneously Develop - Don't "Port" it Later, and DO NOT OUTSOURCE the Effort!!

When an engineer designs a new feature or implements a bug fix, he or she must consider all the target platforms from the beginning, and get it working on all platforms before they consider the feature "done". I estimate that simultaneously developing 'C' code across our three platforms lengthens the development by less than 5 percent overall. But if you developed a Windows application for a full year then tried to "port" it to the Mac it might come close to doubling the development time.

To be clear, by "simultaneously" I mean the design takes all target platforms into account before coding even starts, but then the 'C' code is composed and compiled on one platform first (pick any one platform, whichever is the programmer's favorite or has the better tools for the task at hand). Then within a few hours of finishing the code on the first platform it is compiled, tested, touched up, and then finished on all the other platforms by the original software engineer.

There are several reasons it is so much more expensive to "Port A Year Later". First of all, while a programmer works on a feature he or she remembers all the design criteria and corner cases. By simultaneously getting the feature working on two platforms, the design is done ONCE and the learning curve is climbed ONCE. If that same programmer came back a year later to "port" the feature, the programmer must re-acquaint themselves with the source code. Certain issues or corner cases are forgotten about and then rediscovered during QA.

But the primary reason it is expensive to "Port A Year Later" is that programmers take short cuts, or simply through ignorance or lack of self control don't worry about the other platforms. A concrete example is over-use of the Windows (in)famous registry. The Windows registry is essentially an API to store name-value pairs in a well known location on the file system. It's a perfectly fine system, but it's approximately the same as saving name-value pairs in an XML file to a well known location on the file system. If the original programmer does not care about cross platform and makes the arbitrary decision to write values into the registry, then a year later the "port" must re-implement and test that code from scratch as XML files on the other platforms that do not have a registry. However, if the same original programmer thinks about all target platforms from the beginning and chooses to write an XML file, it will work on all platforms without any changes.
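The registry-versus-file point can be sketched with a plain name-value settings file (a toy illustration of my own; the function names and the one-pair-per-line format are hypothetical, and Backblaze's real configuration files are XML). The same two functions compile and run unchanged on Windows, Mac, and Linux:

```cpp
#include <fstream>
#include <map>
#include <string>

// Store settings as "name=value" lines in a plain text file at a well
// known location, instead of in the Windows-only registry.
void SaveSettings(const std::map<std::string, std::string> &settings,
                  const std::string &path)
{
    std::ofstream out(path.c_str());
    for (std::map<std::string, std::string>::const_iterator it = settings.begin();
         it != settings.end(); ++it)
        out << it->first << "=" << it->second << "\n";
}

std::map<std::string, std::string> LoadSettings(const std::string &path)
{
    std::map<std::string, std::string> settings;
    std::ifstream in(path.c_str());
    std::string line;
    while (std::getline(in, line)) {
        std::string::size_type eq = line.find('=');
        if (eq != std::string::npos)
            settings[line.substr(0, eq)] = line.substr(eq + 1);
    }
    return settings;
}
```

If the original programmer writes this instead of registry calls on day one, there is simply nothing to re-implement during a port.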

Finally, the WORST thing you can possibly do is outsource the port to another company or organization. The most efficient person to work on any section of code is the original programmer, and the most efficient way to handle any (small) cross-platform issues is right inline in the code. By outsourcing the port you must deal with communication issues, code merges a year later and the resulting code destabilization, misaligned organizational goals, misaligned schedules (the original team is charging ahead while the port team is trying to stabilize their port), etc. Outsourcing any coding task is almost always a mistake; outsourcing a platform port is a guaranteed disaster.


Rule #2: Factor Out the GUI into Non-Reusable code - then Develop a Cross-Platform Library for the Underlying Logic

Some engineers think "Cross-Platform" means "least common denominator programs" or possibly "bad port that doesn't embrace the beauty of my favorite platform". Not true! You should NEVER sacrifice a single bit of quality or platform specific beauty! What we're shooting for is the maximum re-use of code WITHOUT sacrificing any of the end user experience. Towards that end, the least re-useable code in most software programs is the GUI. Specifically the buttons, menus, popup dialogs or slide down panes, etc. On Windows the GUI is probably in a ".rc" Windows specific resource file laid out by the Visual Studio dialog editor. On the Mac the GUI is typically stored in an Apple specific ".xib" laid out by Apple's "Interface Builder". It's important to embrace the local tools for editing the GUI and just admit these are going to be re-implemented completely from scratch, possibly even with some layout changes. Luckily, these are also the EASIEST part of most applications and done by dragging and dropping buttons in a GUI builder.

But other than the GUI drawing and layout step that does not share any code, much of the underlying logic CAN be shared. Take Backblaze as a concrete example. Both the Mac and PC have a button that says "Pause Backup" which is intended to temporarily pause any backup that is occurring. On the Mac the "Pause Backup" button lives in an Apple "Pref Pane" under System Preferences. On Windows the button lives in a dialog launched from the Microsoft System Tray. But on BOTH PLATFORMS they call the same line of code -> BzUiUtil::PauseBackup(); The BzUiUtil class and all of the functions in it are shared between the two implementations because those really can be cross platform, both platforms want to stop all the same processes and functions and pause the backup.

Furthermore, if at all possible you should factor the GUI code all the way out into its own stand-alone process. In Backblaze's case, the GUI process (called "bzbui") reads and writes a few XML files out to the filesystem instructing the rest of the system how to behave, for example which directories to exclude from backup. There is a completely separate process called "bztransmit" that has no GUI (and therefore can be much more cross-platform) which reads the XML configuration files left by the "bzbui" GUI process and knows not to transmit the directories excluded from backup. It turns out having a process with no GUI is a really good feature for a low level service such as online backup, because it allows the backups to continue while the user is logged out (and therefore no GUI is authorized to run).

Finally, notice that this design is really EASY (and solid, and has additional benefits) but only if you think about cross platform from the very beginning of a project. It is much more difficult if we ignore the cross platform design issues at first and allow the code to be peppered randomly with platform specific GUI code.


Rule #3: Use Standard 'C' types, not Platform Specific Types

This seems painfully obvious, but it's one of the most common mistakes that lead to more and more code that is hard to fix later. Let's take a concrete example: Windows offers an additional type not specified in the 'C' language called DWORD which is defined by Microsoft as "typedef unsigned long DWORD". It seems really obvious that using the original 'C' type of "unsigned long" is superior in every way AND it is cross platform, but it's a common mistake for programmers embedded in the Microsoft world to use this platform specific type instead of the standard type.

The reason a programmer might make this mistake is that the return values from a particular operating system call might be in a platform specific type, and the programmer then goes about doing further calculations in general purpose source code using this type. Going further, the programmer might even declare a few more variables of this platform specific type to be consistent. But instead, if the programmer immediately switches over to platform neutral, standard 'C' variables as soon as possible then the code easily stays cross platform.

The most important place to apply this rule is in cross platform library code. You can't even call into a function that takes a DWORD argument on a Macintosh, so the argument should be passed in as an unsigned long.
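As a sketch of the rule (a function entirely of my own invention, not from the article): a shared-library routine declared with standard 'C' types only. The same declaration with a DWORD parameter would not even compile on the Mac or Linux; "unsigned long" compiles everywhere and means the same thing to every caller.

```cpp
// A cross-platform library function: only standard 'C' types appear in
// the signature. If a Windows API call hands back a DWORD, convert it to
// a standard type like this at the earliest opportunity, before it leaks
// into general purpose code.
unsigned long SumOfBytes(const unsigned char *buf, unsigned long numBytes)
{
    unsigned long total = 0;
    for (unsigned long i = 0; i < numBytes; i++)
        total += buf[i];
    return total;
}
```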


Rule #4: Use Only Built In #ifdef Compiler Flags, Do Not Invent Your Own

If you do need to implement something platform specific, wrap it in the 100 percent STANDARD #ifdefs. Do not invent your own and then additionally turn them off and on in your own Makefiles or build scripts.

One good example is that Visual Studio has an #ifdef compiler flag called "_WIN32" that is absolutely, 100 percent present and defined all the time in all 'C' code compiled on Windows. There is NO VALID REASON to have your own version of this!! As long as you use the syntax as follows:

#ifdef _WIN32
// Microsoft Windows Specific Calls here
#endif

then the build will always be "correct", regardless of which Makefiles or Visual Studio you use, and regardless of if an engineer copies this section of code to another source file, etc. Again, this rule seems obvious, but many (most?) free libraries available on the internet insist you set many, MANY compiler flags correctly, and if you are just borrowing some of their code it can cause porting difficulties.


Rule #5: Develop a Simple Set of Re-useable, Cross-Platform "Base" Libraries to Hide Per-Platform Code

Let's consider a concrete example at Backblaze: the "BzFile::FileSizeInBytes(const char *fileName)" call, which returns how many bytes are contained in a file. On a Windows system this is implemented with the Windows-specific GetFileAttributesEx(), on Linux it is implemented as an lstat64(), and on the Macintosh it uses the Mac-specific call getattrlist(). So *INSIDE* that particular function is a big #ifdef where the implementations don't share any code. But now callers all over the Backblaze system can call this one function and know it will be very fast, efficient, and accurate, and that it is sure to work flawlessly on all the platforms.

In practice, it takes just a very short amount of time to wrap common calls, and then they are used HUNDREDS of times throughout the cross platform code for great advantage. So you must buy in to building up this small, simple set of functionality yourself, and this will slow down initial development just a little bit. Trust me, you'll see it's worth it later.
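The shape of such a wrapper can be sketched with a hypothetical example (the function and its name are mine, not Backblaze's, though GetDiskFreeSpaceExA and statvfs are real platform APIs): one function, one big #ifdef inside, and only standard types in the signature.

```cpp
#include <cstdint>
#ifdef _WIN32
#include <windows.h>
#else
#include <sys/statvfs.h>
#endif

// Hypothetical base-library wrapper in the spirit of BzFile: the
// platform-specific types (ULARGE_INTEGER, struct statvfs) never escape
// this function. Returns 0 on error.
uint64_t BytesFreeOnDisk(const char *path)
{
#ifdef _WIN32
    ULARGE_INTEGER freeBytes; // Windows-specific type, confined here
    if (!GetDiskFreeSpaceExA(path, &freeBytes, NULL, NULL))
        return 0;
    return (uint64_t)freeBytes.QuadPart;
#else
    struct statvfs vfs; // POSIX-specific type, confined here
    if (statvfs(path, &vfs) != 0)
        return 0;
    return (uint64_t)vfs.f_frsize * vfs.f_bavail; // fragment size * free blocks
#endif
}
```

Callers everywhere in the system can now ask for free disk space without knowing or caring which operating system they are running on.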


Rule #6: Use Unicode (specifically UTF-8) for All APIs

I don't want this to become a tutorial on Unicode, so I'll just sum it up for you: Use Unicode, it is absolutely 100 percent supported on all computers and all applications on earth now, and specifically the encoding of Unicode called UTF-8 is "the right answer". Windows XP, Windows Vista, Macintosh OS X, Linux, Java, C#, all major web browsers, all major email programs like Microsoft Outlook, Outlook Express, or Gmail, everything, everywhere, all the time support UTF-8. There's no debate. This web page that you are reading is written in UTF-8, and your web browser is displaying it perfectly, isn't it?

To give Microsoft credit, they went Unicode before most other OS manufacturers did when Windows NT was released in 1993 (so Microsoft has been Unicode for more than 15 years now). However, Microsoft (being early) chose the non-fatal but unfortunate path of using UTF-16 for their 'C' APIs. They have corrected that mistake in their Java and C# APIs and use UTF-8, but since this article is all about 'C' it deserves a quick mention here. For those not well acquainted with Unicode, just understand that UTF-8 and UTF-16 are two different encodings of the same identical underlying string, and you can translate any string encoded in one form into the other in one line of code, very quickly and efficiently, with no outside information; it's a simple algorithmic translation that loses no data.
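To illustrate how mechanical that translation is, here is a minimal UTF-8 to UTF-16 converter in portable C++ (my own sketch: it does no validation of malformed input, and production code would normally call the platform's own routine, such as MultiByteToWideChar on Windows, rather than hand-roll this):

```cpp
#include <cstdint>
#include <string>

// Decode each UTF-8 sequence to a code point, then emit UTF-16 code
// units, using a surrogate pair for code points above 0xFFFF.
std::u16string Utf8ToUtf16(const std::string &utf8)
{
    std::u16string out;
    size_t i = 0;
    while (i < utf8.size()) {
        unsigned char b = utf8[i];
        uint32_t cp;
        if (b < 0x80) { // 1-byte sequence (US-ASCII)
            cp = b;
            i += 1;
        } else if ((b >> 5) == 0x6) { // 2-byte sequence
            cp = ((b & 0x1F) << 6) | (utf8[i + 1] & 0x3F);
            i += 2;
        } else if ((b >> 4) == 0xE) { // 3-byte sequence (e.g. most CJK)
            cp = ((b & 0x0F) << 12) | ((utf8[i + 1] & 0x3F) << 6)
               | (utf8[i + 2] & 0x3F);
            i += 3;
        } else { // 4-byte sequence (code points beyond the BMP)
            cp = ((b & 0x07) << 18) | ((utf8[i + 1] & 0x3F) << 12)
               | ((utf8[i + 2] & 0x3F) << 6) | (utf8[i + 3] & 0x3F);
            i += 4;
        }
        if (cp <= 0xFFFF) {
            out.push_back(static_cast<char16_t>(cp));
        } else { // split into a UTF-16 surrogate pair
            cp -= 0x10000;
            out.push_back(static_cast<char16_t>(0xD800 | (cp >> 10)));
            out.push_back(static_cast<char16_t>(0xDC00 | (cp & 0x3FF)));
        }
    }
    return out;
}
```

Note there is no table lookup, no locale, and no outside information of any kind; the translation is purely arithmetic, which is why round-tripping between the two encodings loses nothing.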

SOOOO... let's take our example from "Rule #5" above: BzFile::FileSizeInBytes(const char *fileName). The "fileName" is actually UTF-8 so that we can support file names in all languages such as Japanese (example: C:\tmp\子犬.txt). One of the nice properties of UTF-8 is that it is backward compatible with old US-ASCII, while fully supporting all international languages such as Japanese. The Macintosh file system API calls and the Linux file system API calls already take UTF-8, so their implementations are trivial. But so that the rest of our system can all speak UTF-8, and so we can write cross-platform code calling BzFile::FileSizeInBytes(const char *fileName), on Windows the implementation must do a conversion step as follows:

int BzFile::FileSizeInBytes(const char *fileName)
{
#ifdef _WIN32
wchar_t utf16fileNameForMicrosoft[1024]; // an array of wchar_t is UTF-16 in Microsoft land
ConvertUtf8toUtf16(fileName, utf16fileNameForMicrosoft); // convert from Utf8 to Microsoft land!!
WIN32_FILE_ATTRIBUTE_DATA fileAttr;
GetFileAttributesEx(utf16fileNameForMicrosoft, GetFileExInfoStandard, &fileAttr);
return (fileAttr.nFileSizeLow);
#endif
}

The above code is just approximate; it suffers from potential buffer over-runs and doesn't handle files larger than 2 GB, but you get the idea. The most important thing to realize here is that as long as you take UTF-8 into account BEFORE STARTING YOUR ENTIRE PROJECT, supporting multiple platforms (and also international characters) will be trivial. However, if you first write the code assuming Microsoft's unfortunate UTF-16 or (heaven forbid) US-ASCII, and only a year later try to port it to the Macintosh and Linux, it will be a nightmare.


Rule #7: Don't Use 3rd Party "Application Frameworks" or "Runtime Environments" to make your code "Cross-Platform"

Third party libraries are great - for NEW FUNCTIONALITY you need. For example, I can't recommend OpenSSL highly enough, it's free, redistributable, implements incredibly secure, fast encryption, and we just could not have done better at Backblaze. But this is different than STARTING your project with some library that doesn't add any functionality other than it is supposed to make your programming efforts "Cross-Platform". The whole premise of the article you are reading is that 'C' and 'C++' are THEMSELVES cross platform, you do not need a library or "Application Framework" to make 'C' cross platform. This goes double for the GUI layer (see "Rule #2" above). If you use a so called "cross platform" GUI layer you will probably end up with an ugly and barely functioning GUI.

There are many of these Application Frameworks that will claim to save you time, but in the end they will just limit your application. The learning curve of these will eclipse any benefit they give you. Here are just a few examples: Qt (by TrollTech), ZooLib, GLUI/GLUT, CPLAT, GTK+, JAPI, etc.

In the worst examples, these application frameworks will bloat your application with enormous extra libraries and "runtime environments" and actually cause additional compatibility problems: if their runtimes do not install or update correctly, then YOUR application will no longer install or run correctly on one of the platforms.

The astute reader might notice this almost conflicts with "Rule #5: Develop Your Own Base Libraries," but the key is that you really should develop your OWN cross platform base set of libraries, not try to use somebody else's. While this might seem to go against the concept of code re-use, the fact is it takes just a SMALL amount of time to wrap the few calls you will actually need yourself. And for this small penalty, you get full control over extending your own set of cross platform base libraries. It's worth doing just to de-mystify the whole thing. It will show you just how easy it really is.


Rule #8: The Raw Source Always Builds on All Platforms -> there isn't a "Script" to Transmogrify it to Compile

The important concept here is that the same exact foo.cpp and foo.h file can be checked out and always build on Windows, Macintosh, and Linux. I am mystified why this isn't universally understood and embraced, but if you compile OpenSSL there is a complex dance where you run a PERL script on the source code, THEN you build it. Don't get me wrong, I really appreciate that OpenSSL has amazing functionality, it is free, and it can be built on all these platforms with a moderate amount of effort. I just don't understand why the OpenSSL authors don't PRE-RUN the PERL script on the source code BEFORE they check it into the tree so that it can be compiled directly!

The whole point of this article is how to write software in a cross platform manner. I believe that the above PERL scripts allow the coders to make platform specific mistakes and have the PERL script hide those mistakes. Just write the source correctly from the beginning and you do not need the PERL script.


Rule #9: All Programmers Must Compile on All Platforms

It only takes about 10 minutes to teach any entry-level programmer how to checkout and build on a new platform they have never used before. You can write the 4 or 5 steps of instructions on the back of a business card and tape it to their monitor. If this entry level programmer wrote his or her new cross platform code according to the rules in this article, it will build flawlessly (or be fixable in a few minutes with very basic changes). THIS IS NOT SOME HUGE BURDEN. And ALL programmers must be responsible for keeping their code building on ALL platforms, or the whole cross platform effort won't go smoothly.

With a small programming team (maybe fewer than 20 programmers), it is just fine to use the source code control system (like Subversion or CVS) as the synchronization point between the multi-platform builds. By this I mean that once a new feature is implemented and tested on one platform (let's say Windows), the programmer checks it into Subversion, then IMMEDIATELY checks it out on both Macintosh and Linux and compiles it. This means the build can be broken for one or two minutes while any small oversights are corrected. You might jump to the conclusion that the other programmers will commonly notice this 1 or 2 minute window once a day, but you would be dead wrong. In the last 15 years of using this system on teams ranging from 10 programmers to 50 programmers, I can only remember two incidents where I happened to "catch" the tree in a broken state due to this "cross platform issue window," and it only cost me 5 minutes of productivity to figure out I was just unlucky in timing. During that same 15 years there were probably HUNDREDS of broken builds I discovered that had nothing to do with cross-platform issues at all, or that had specifically to do with a stubborn or incompetent programmer who refused to obey this "Rule #9" and compile the code themselves on all platforms. This last point leads me to our final rule below...


Rule #10: Fire The Lazy, Incompetent, or Bad-Attitude Programmers Who Can't Follow These Rules

Sometimes in a cross platform world the build can be broken on one platform, and that's Ok as long as it doesn't happen often. Your very best programmer can get distracted at the critical moment they were checking in code and forget to test the compile on the other platforms. But when one of your programmers is CONSISTENTLY breaking the build, day after day, and always on the same platform, it's time to fire that programmer. Once an organization has stated their goals to develop and deliver cross platform, it is simply unprofessional to monkey up the system by ignoring these rules.

The offending programmer is either patently incompetent or criminally lazy, and either way it is far more than just "grounds for termination"; I claim it is a moral obligation to fire that programmer. That programmer is giving decent, hard-working programmers a bad name, and it reflects badly on our honorable profession to allow them to continue to produce bad code.


Conclusion: Cross Platform is Easy, but You Must Start from the First Line of Code!

If you don't take anything else away from this article, the most important point is that if you EVER want your software project to run on multiple platforms, then you must START your software project from the very first line of code as a cross-platform project. And if you just maintain the cross-platform philosophy the whole way along, the rest of these rules will come about naturally just as they did for us at Backblaze.

Tuesday, March 20, 2012

You Can't Innovate Like Apple

Source: here

When what you teach and develop every day has the title “Innovation” attached to it, you reach a point where you tire of hearing about Apple. Without question, nearly everyone believes the equation Apple = Innovation is a fundamental truth. Discover what makes them different. By Alain Breillatt

Apple! Apple! Apple! Magazines can’t possibly be wrong, so Apple is clearly the “Most Admired,” the “Most Innovative,” and the “Master at Design.” (1, 2, 3, 4, 5)

Let me tell you, when what you teach and develop every day has the title “Innovation” attached to it, you reach a point where you tire of hearing about Apple. Without question, nearly everyone believes the equation Apple = Innovation is a fundamental truth—akin to the second law of thermodynamics, Boyle’s Law, or Moore’s Law.

But ask these same people if they understand exactly how Apple comes up with their ideas and what approach the company uses to develop blockbuster products—whether it is a fluky phenomenon or based on a repeatable set of governing principles—and you mostly get a dumbfounded stare. This response is what frustrates me most, because people worship what they don’t understand.

I’ve been meaning to write this article for some time, but finally sat down and put pixel to screen after coming across a description of Michael Lopp’s (a Senior Engineering Manager at Apple) discussion of how Apple does design. The discussion happened during a panel—including John Gruber (yes, for you Apple heads, that “Daring Fireball” guy)—titled "Blood, Sweat, and Fear: Great Design Hurts", which was presented at SXSW Interactive on March 8, 2008. I scoured the Internet to find an audio or video recording so I could garner these pearls of wisdom straight from the developers’ mouths, but no search engine I know of could locate said files. If someone reads this and happens to have such a recording, please, please share!



Insights On Innovation

Without the recorded details, here is a collection of insights that various attendees created from their notes of the discussion—along with my own thoughts about what this portends for people who aspire to be like Apple. My intention is to synthesize these comments into a single representation of what Lopp and Gruber actually said.

Helen Walters at BusinessWeek.com summarized Lopp’s panel with five key points:

Apple thinks good design is a present. Lopp kicked off the session by discussing, of all things, the story of the obsessive design of the new Mentos box. You know Mentos, right? Remember the really odd packaging (paper rolls like Spree candy) promoted by some of the most bizarre ads on TV? It’s the candy that nobody I know eats; they just use it to create cola geysers.

Have you looked recently at the new packaging Mentos comes in? Lopp says the new box is a clean example of obsessive design, because the cardboard top locks open and then closes with a click. There’s an actual latch on the box, and it actually works. It’s not just a square box, but one that serves a function and works. I bought a box just so I could examine it more closely. It’s an ingenious design of subtle simplicity that works so well even shaking it upside down does not pop the box open.

According to Gruber, the build-up of anticipation leading to the opening of the present that Apple offers is an important—if not the most important—aspect of the enjoyment people derive from Apple’s products. This is because the world divides into two camps:

  1. There are those who open their presents before Christmas morning.

  2. There are those who wait. They set their presents under the tree and, like a child, agonize over the enormous anticipation of what will be in the box when they open it on Christmas morning.


Apple designs for #2. No other mass-consumer products company puts as much attention to detail into the fit and finish of the box—let alone the out-of-box experience. If you’re an Apple enthusiast, you can capture the Christmas morning experience more than once a year with every stop you make at the local Apple store.

Apple “wraps great ideas inside great ideas,” and the whole experience is linked as the present concept traces concentric circles from the core outward. Apple’s OS X operating system is the present waiting inside its sleek, beautiful hardware; its hardware is the present, artfully unveiled from inside the gorgeous box; the box is the present, waiting for your sticky little hands inside its museum-like Apple stores. And the bow tying it all together? Jobs’ dramatic keynote speeches, where the Christmas morning fervor is fanned on a grand stage by one of the business world’s most capable hype men.

Pixel-perfect mockups are critical. This is hard work and requires an enormous amount of time, but is necessary to give the complete feeling for the entire product. For those who aren’t familiar with the term, pixel perfect means the designers of a piece of Apple software create an exact image—down to the very pixel (the basic unit of composition on a computer or television display) —for every single interface screen and feature.

There is no “Lorem Ipsum” used as filler for content, either. At least one of the senior managers refuses to look at any mockups that contain such “Greek” filler. Doing this detailed mockup removes all ambiguity—everyone knows and can see and critique how the final product looks. It also means you will not encounter interpretative changes by the designer or engineer after the review, as they are filling in the content—something I have seen happen time and time again. Ultimately, it means no one can feign surprise when they see the real thing.

10 to 3 to 1. Take the pixel-perfect approach and pile on top of it the requirement that Apple designers expect to design 10 different mockups of any new feature under consideration. And these are not just crappy mockups; they all represent different, but really good, implementations that are faithful to the product specifications.

Then, by using specified criteria, they narrow these 10 ideas down to three options, which the team spends months further developing…until they finally narrow down to the one final concept that truly represents their best work for production.

This approach is intended to offer enormous latitude for creativity that breaks past restrictions. But it also means they inherently plan to throw away 90% of the work they do. I don’t know many organizations for which this would be an acceptable ratio. Your CFO would probably declare, “All I see is money going down the drain.” This is a major reason why I say you can’t innovate like Apple.

Paired design meetings. Every week, the teams of engineers and designers get together for two complementary meetings.

Brainstorm meeting—leave your hang-ups at the door and go crazy in developing various approaches to solving particular problems or enhancing existing designs. This meeting involves free thinking with absolutely no rules.

Production meeting—the absolute opposite of the brainstorm meeting, where the aim is to put structure around the crazy ideas and define the how to, why, and when.

These two meetings continue throughout the development of any application. If you have heard stories of Jobs discarding finished concepts at the very last minute, you understand why the team operates in this manner. It’s part of their corporate DNA of grueling perfection. But the balance does shift away from free thinking and more toward a production mindset as the application progresses—even while they keep the door open for creative thought at the latest stages.

Pony meetings. These meetings are scheduled every two weeks with the internal clients to educate the decision-makers on the design directions being explored and influence their perception of what the final product should be.
They’re called “pony” meetings because they correspond to Lopp’s description of the experience of senior managers dispensing their wisdom and wants to the development team when discussing the early specifications for the product.

“I want WYSIWYG…

I want it to support major browsers…

I want it to reflect the spirit of our company.”


In other words, I want a pony. Who doesn’t want a pony? A pony is gorgeous! Anyone who has been through this experience can tell you that these people are describing what they think they want. Lopp concedes that, since these senior managers sign the checks, you cannot simply ignore them. But you do have to manage their expectations and help align their vision with the team’s.

The meetings achieve this purpose and give a sense of control to senior management, so that they have visibility into the process and can influence the direction. Again, the purpose of this is to save the team from pursuing a line of direction that ultimately gets tossed because one of the decision makers wasn’t on board.

Now, if you want a quick summary of what we just discussed, I highly recommend Mike Rohde’s SXSW Interactive 2008 Sketchnotes, his illustrated notes of the Lopp/Gruber panel. Content for this write-up also came from: Scott Fiddelke, Dylan at The Email Wars, Jared Christensen, David at BFG, and Tom Kershaw.



What else does Apple do differently?

If you read the various interviews that Jobs and Jonathan Ive (Senior Vice President, Industrial Design at Apple) have given over the last few years, you’ll find a few specific trends:

1. Apple does not do market research. This is straight from Jobs’ mouth: “We do no market research.” They scoff at the notion of target markets, and they don’t conduct focus groups. Why? Because everything Apple designs is based on Jobs’ and his team’s perceptions of what they think is cool. He elaborates:

“It’s not about pop culture, and it’s not about fooling people, and it’s not about convincing people that they want something they don’t. We figure out what we want. And I think we’re pretty good at having the right discipline to think through whether a lot of other people are going to want it, too. That’s what we get paid to do. So you can’t go out and ask people, you know, what’s the next big [thing]. There’s a great quote by Henry Ford, right? He said, ‘If I’d have asked my customers what they wanted, they would have told me, “A faster horse.”’”

Said another way, Jobs hires really smart people, and he lets them loose—but on a leash, since he oversees it all with an extremely demanding eye. If you’re seeing visions of the “Great Eye” from J.R.R. Tolkien’s books, then you probably wouldn’t be too far off. Here’s the way their simple process works:

Start with a gut sense of an opportunity, and the conversations start rolling:

Q: What do we hate?
A: Our cell phones.

Q: What do we have the technology to make?
A: A cell phone with a Mac inside.

Q: What would we like to own?
A: An iPhone, what else?


But Jobs also explained that in this specific conversation, there were big debates across the organization about whether or not they could and should do it. Ultimately, he looked around and said, “Let’s do it.”

I think it’s clear they also benefit from the well-timed “leak” to the market. By that I mean this otherwise tight-lipped organization occasionally lets early ideas slip out to see what kind of response they might generate. Again, what other company benefits from having thousands of adoring designers come up with beautifully rendered concepts of what they think the next great product should look like?

2. Apple has a very small team that designs its major products. Look at Ive and his team of a dozen to 20 designers, the brains behind the genius products Apple has delivered to the market since the iMac back in 1998. New product development is not farmed out across the organization; instead, it is creatively driven by this select group of world-class designers.

Jobs himself has delegated away many of his day-to-day operational responsibilities, enabling him to focus half of his week on development efforts for specific products, from the big picture down to the finest details.

3. Apple owns their entire system. They rely on no one else to provide inputs to the design and development of their products. They own the OS, they own the software, and they own the hardware. No other consumer electronics organization can easily do what Apple does, because Apple owns all of the technology and controls the intimate interactions that ultimately become the total user experience. There is no other way to ensure such a seamless experience—a single executive calls the final shots for every single component.

4. Apple focuses on a select group of products. Apple acts like a small boutique, developing beautiful, artistic products in a manner that makes it very difficult to scale up to broad and extensive product lines. Part of this is due to the level of attention to detail provided by their small teams of designers and engineers. That a multi-billion-dollar company has only about 30 major products is astounding; companies of comparable revenue typically carry thousands of products across hundreds of SKUs.

As Jobs explains, this is the focus that enables them to bring such an extensive level of attention to excellence. But it is also an inherently risky enterprise, because they are limited in what new product areas they can invest in if one fails.

5. Apple has a maniacal focus on perfection. They say Jobs had the marble for the floor at the New York Apple store shipped to California first so he could examine the veins. He also complained about the chamfer radius on the plastic case of an early prototype of the Macintosh. You had better believe, given the 10 to 3 to 1 approach for design, that every shadow, every pixel is scrutinized. It’s in their DNA.

They are willing to spend the money to make sure everything is perfect, because that is their mission.



So is it possible for you to innovate like Apple?

So given all this, what is a company to do if it wants to innovate like Apple? First, forget about it unless you are willing to invest heavily in establishing a culture of innovation like Apple’s, because it’s not just about copying Apple’s approach and procedures. The vast majority of executives who say, “I want to be just like Apple,” have no idea what it really takes to achieve that level of success. What they’re saying is that they want to be adored by their customers, they want to launch sexy products that cause the press to fall all over themselves, and they want to experience incredible financial growth. But they generally want to do it on the cheap.

To succeed at innovation as Apple has, you need the following:

You need a leader who prioritizes new product innovation. The CEO needs to be someone who looks out to the horizon and consistently sets a vision of innovation for the organization that he or she is willing to support completely with people, funds, and time. Further, that leader needs to be fluent in the language of your customer and the markets in which you compete. If the CEO cannot be this person, then he or she needs to be willing to trust that role to a senior executive and give that person the authority and latitude to effectively oversee the new product development process.

You need to focus. A cohesive vision describes the storyline for your products and services. That storyline needs to state decisively what is in bounds and what is out-of-bounds over an 18-month to 3-year period. Everyone in the development process who matters must be in lockstep with this vision, which means you need to have open lines of communication that are regularly and consistently managed.

This storyline or strategic vision needs to be revised according to market changes and the evolution of your new product pipeline. It helps that Apple tends to approach their products with a systemic frame of mind, looking to develop the “total solution” rather than just loosely joined components.

Obviously, the other focus is to make a profit, since that is what supports the continued efforts to design the next great product. And, when every one of the major products is a moon shot, they have to work to ensure it meets exacting standards—to do everything they can to ensure success.

You need to know your customer and your market. Jobs and team can get away with not doing market research, identifying target markets, or going out and talking with customers because of the markets they play in and the cult-like customers who adore them. Most technology companies also believe they can get away with this—and most technology companies get it wrong.

Quick, identify 10 different pieces of technology that truly meet your needs and that don’t bug you due to a major flaw you either have to live with or compensate for in some fashion. Could you come up with more than five? I didn’t think so.

We’re drowning in a sea of technological crap, because every product released to the market is the result of multiple compromises: decisions made by the product manager, the engineering manager, the marketing manager, the sales manager, and everyone else with skin in the game as they shape the offering to what they think are the target customer’s needs.

The reason Jobs and Ive get it right is because they design sexy products with elegant and simple interfaces—for themselves. And they count on their hip gaggle of early adopters to see it the same way. Once the snowball starts rolling, it’s all momentum from there.

Apple doesn’t sell functional products; they sell fashionable pieces of functional art. That present you’re unwrapping is all about emotional connection. And Jobs knows his marketplace better than anyone else.

Because you’re not Apple and you are likely not selling a similar set of products, you must do research to understand the customer. And, while I’m sure Jobs says he doesn’t do research, it’s pretty clear that his team goes out to thoroughly study behaviors and interests of those they think will be their early adopters. Call it talking to friends and family; but, honestly, you know that these guys live by immersing themselves in the hip culture of music, video, mobile, and computing.

The point is not to go ask your customers what they want. If you ask that question in the formative stages, then you’re doing it wrong. The point is to go immerse yourself in their environment and ask lots of “why” questions until you have thoroughly explored the ins and outs of their decision making, needs, wants, and problems. At that point, you should be able to break their needs and the opportunities down into a few simple statements of truth.

As Alan Cooper says, how can you help end users achieve their goal if you don’t know what it is? You have to build a persona or model that accurately describes the objectives of your consumers and the problems they face with existing solutions. The real benefit, as I saw in my years working at InstallShield and Macrovision, is that unless you put a face and expectations on that consumer, disagreements about features, product positioning, or design come down to who wields the greatest political will, rather than who has the cleanest interpretation of the consumer’s need.

You need the right people, and you need to reward them.

The designers at Apple are paid 50% more than their counterparts at other organizations. These designers aren’t working at Apple simply because they’re paid more. They stay at Apple because of the amazing things they get to do there. Rewards are about salary and benefits, but they are also about recognition and being able to do satisfying work that challenges the mind and allows the creative muscles to stretch. Part of this also comes down to ensuring your teams are passionate about innovation and dedicated to the focus of the organization. As Jobs says, he looks for people who are crazy about Apple. So you need to look closely at the people you are hiring and whether you have the right team in the first place.



About Author

Alain Breillatt is a product manager with more than 14 years of experience in bringing new products and services to market. His previous professional lives have carried him through medical device R&D, consumer credit, IT management, software product management, and new product consulting at companies including Baxter, Sears, InstallShield, Macrovision, and Kuczmarski & Associates. As a consultant he has generated new product portfolios for Fortune 500 and smaller organizations and developed course materials on innovation for the MBA and Executive Education programs at Northwestern University’s Kellogg School of Management.

Alain is a Director of Product Management for The Nielsen Company’s syndicated consumer research solutions. Contact Alain at abreillatt@gmail.com or catch his latest insights at http://pictureimperfect.net