This blog post is a recap of this year’s CppCon, written by our Compiler Whisperer, Nenad Mikša. Besides whispering to compilers, Nenad leads the Tools and Infrastructure team at Microblink.

This year, our senior C++ developers visited CppCon, the largest and most important C++ conference in the world. It was held from September 16th to September 20th in Aurora, Colorado. Compared to Meeting C++, CppCon is much bigger and covers a range of topics for all levels, from beginners and students to seasoned professionals, and for different industries, from embedded to the cloud. The conference opened with Bjarne Stroustrup’s keynote about the current state of C++ and its future evolution, named C++20: C++ at 40. Next year, C++ will celebrate its 40th birthday, and Bjarne talked about the progress and challenges of the language through those 40 years.

C++ versatility

As one of the most widely used languages today, C++ powers software in the aerospace industry: aircraft, spaceships, and rovers on other planets. It also powers the automotive industry: automobiles, infotainment systems, and even self-driving cars. It is the dominant language in the gaming industry, as every significant game engine is written in C++. Besides that, prominent software tools, such as professional image and video editors (Photoshop, Premiere Pro, etc.), are written in C++, and most internet search infrastructure (e.g., Google’s search engine) is written in C++ as well. Even mobile apps have parts written in C++ to get the most performance out of the device and for cross-compatibility between mobile platforms. That is precisely why we also use C++ as our primary programming language.

Having a single language support such a wide variety of platforms and use cases is a rather challenging task, but C++ copes with it. However, this has made C++ one of the most complex programming languages today - to both use and learn. For that matter, the ISO committee, which steers the development of the C++ language, is working hard to address those issues while still making sure that all the features that enabled C++’s success remain intact.

The main idea is to make C++ easy to learn and use while keeping excellent performance. The management of this complexity lies in what Bjarne calls the onion principle. Higher-level concepts are implemented using lower-level concepts, without actually hiding them (a common practice in other languages). Developers can comfortably use various concepts in their code while still seeing how those concepts are implemented. That lets a developer learn how common concepts work behind the scenes and gives them the ability to step lower and implement their own, if they need to, usually to get better performance. However, stepping lower and using more low-level concepts requires more experience and can cause unexpected results if used improperly. Therefore, the onion principle is the perfect metaphor - the more layers you peel off, the more you cry, but also, the more potent the onion is.

The onion principle of C++

The second keynote was by Andrei Alexandrescu. He talked about speeding up algorithms, referring back to his talk about Design by Introspection from last year’s Meeting C++, and argued that the fastest code does not always come from the fastest algorithm.

Ben Smith, one of the authors of WebAssembly, gave the third keynote, about compiling C++ into WebAssembly. He explained how he ported the clang compiler to WebAssembly and made it work directly in the browser.

The fourth keynote, by Sean Parent, was about Relationships in Code. He talked about connections between different parts of code and between code and the developer. By better understanding those relationships, you can write better algorithms instead of relying on overly generic ones, which may make your code bloated and underperforming.

The final and most talked-about keynote was Herb Sutter’s, about Making RTTI and Exceptions More Affordable and Usable. Herb is the current convener of the ISO C++ committee, and he spoke about the two most painful parts of C++ - runtime type information and exceptions. Those two features were standardized back in 1998 and are still the only two features in C++ that break the Zero Overhead Principle. That principle states that you should not pay for features you don’t use, and that when you do use them, you should not be able to hand-write a more efficient version than the one provided by the language.

The runtime type information feature lets your code inspect an object’s type at runtime. However, C++ achieves that by automatically adding lots of information to your types and classes, even if you never use that information. The same can be implemented more efficiently with plain enumerations, which is something we also do at Microblink.

Exceptions are the C++ feature meant to enable easy error handling. Instead of polluting your function’s signature with error-handling code, you can throw an exception in case of an error. However, this feature is currently implemented very inefficiently in all C++ compilers, due to requirements that the C++ standard mandates for exceptions. That results in both a runtime and a code-size penalty, even if you don’t use exceptions at all. For example, after we disabled C++ exceptions in our codebase, the binary size of the BlinkID SDK for Android was reduced by over 1 MB per architecture. Since the BlinkID SDK for Android contains binaries for 4 CPU architectures, disabling C++ exceptions reduced the SDK size by more than 4 MB - only by disabling something we never used in the first place.

As can be seen from these examples, the two features are the current pain points of the C++ language, mainly because they are part of the C++ standard. Even though all major C++ compilers let you disable them, by doing so you are no longer using standard C++. That can lead to undefined behavior when combining your code with a third-party C++ library that was written in standard C++ and uses exceptions.

Herb set out on a mission to solve those problems by proposing two standardization options. One is the introduction of static reflection support, which would let you examine your types at compile time and optionally generate some runtime type information if you need it. This feature would also enable design by introspection, which Andrei talked about. The other proposal is about static exceptions, which would use a different mechanism for propagating an exception to the caller, without the big overheads of current exceptions.

We hope that those proposals will soon become part of the C++ standard.

Beyond the keynotes, CppCon was rich with talks split into 6-8 parallel tracks, so it was impossible to attend all of them. Lots of talks covered new features in C++20, such as ranges, concepts, coroutines, and modules. At Microblink, we’re looking forward to those and will start using them as soon as all our compilers support them. One of my favorite talks was Ben Steagall’s The Business Value of a Good API (Watch it here). He explained how businesses should invest in creating quality APIs to reduce misuse of their software and the financial damage that comes with it.

Expensive software failures

All of the failures listed in the picture above happened due to improperly designed APIs. They were not intuitive to use, so integrators made costly mistakes while struggling to use APIs developed by other teams.

Creating a good API for your software library is a complicated and expensive task. However, it creates value that can be considered software capital. If you create a lousy API instead, it will be much cheaper to produce and, initially, faster to deploy, but it will create technical debt that costs more the longer you neglect to refactor it.

Technical Debt vs Software Capital

To increase software capital and reduce technical debt, Ben suggests something radical: using the Rational Unified Process instead of the SCRUM methodology. Ben argues that SCRUM focuses too much on the quick delivery of features, which usually requires extending or even breaking the API of your software library. There is nothing wrong with that methodology if done correctly. However, that’s rarely the case, as the drive to produce as many features as possible leaves too little time for properly designing your library’s API - the reason why so many libraries out there have poorly designed APIs.

Why are so many APIs so awful?

I think that RUP is excellent for big projects, like building the software for an aircraft or a spaceship. However, for developing software like BlinkID, where clients’ desires drive most features, SCRUM fits better. But we need to be careful about how those features influence the API that our users will use. We’re proud of our API, which will withstand all the new features planned for BlinkID and our other SDKs. Creating this API followed a hybrid Rational Unified Process and took several months to design and a few more to implement. By having a good API design, we can now focus on producing new features for our users with agile methodologies, while making sure that our API does not get broken or bloated.

Networking

One of the main aspects of every conference is, of course, the networking. At CppCon, we had a chance to exchange development experiences with fellow C++ engineers. One of the most exciting gatherings was the Mobile C++ development roundtable, where we exchanged experiences with C++ in mobile app development.

Another highlight was the fireside chat with the C++ ISO committee, where the committee answered developers’ questions for more than an hour and a half.

The most valuable meetings were, of course, those with compiler developers and the ISO committee, where we discussed bugs in compilers and the challenges of building our code for all the platforms we support. We took the opportunity to ask the committee why some features in C++ are the way they are and why they couldn’t be better (most notably std::function). We talked with the Microsoft compiler team and described some very peculiar bugs we had stumbled upon in their compiler. With the LLVM developers, we spoke about upcoming features in new clang releases and the state of WebAssembly support, and we had a quick meeting with the Conan team to discuss some issues we had with using their package manager.

All in all, we were thrilled to attend this year’s CppCon, and we look forward to attending again next year.