> As early as 1981, I pointed out that by reducing the number of objects that I had to keep track of explicitly from many tens of thousands to a few dozens, I had reduced the intellectual effort needed to get the program right from a Herculean task to something manageable, or even easy.
Fast forward a few decades, and we're still very much on this journey of finding the right abstractions/interfaces/libraries/languages. I feel like there must be a complexity equivalent to Parkinson's law: complexity expands to fill the space left in between abstractions.
auto_ptr was so misdesigned that it was deprecated in C++11 and erased from the language entirely in C++17.
The problem was that an assignment like “T x = y;” is supposed to have copy semantics, but auto_ptr gave it move semantics, overwriting the source to destroy that reference. This broke all sorts of code that rightfully assumed assignment is done by copying. Therefore all kinds of algorithms like sort broke, as well as resource management. Suffice it to say, I am personally not a fan of C++ language design or Stroustrup’s opinions on programming.
I have some very smart friends who think it's the perfect language, but I kind of prefer almost every language that has come out after C++. I feel like the language adds some very strange semantics in some very strange places that can be hard to reason about until you've spent a lot of time with the language. This wouldn't necessarily be so bad if not for the fact that most people who write C++ have not spent sufficient time to understand it (and I consider myself in that group, though I don't write C++).
I have mixed feelings on D, but I'm very grateful that Rust came along. Rust is arguably even more complicated than C++, but the good thing is that getting a lot of these complications wrong will simply not allow your code to compile. It's a huge pain in the ass at first but I've become grateful for it.
I still write C very occasionally, but Rust has supplanted like 95% of jobs that were formerly C. I still really need to play with Zig.
Personally I use C or something, anything other than C++ really, if I need something more ergonomic or provably correct. Many excuse C++’s design history with “they didn’t know better”, but the oft-forgotten history explained in that video shows otherwise.
auto_ptr was effectively split into std::unique_ptr and std::shared_ptr. The problem was that before C++11, there wasn't a way to distinguish between copy assignment and move assignment.
Correct. But the C++ language is not a language in which one should just say “screw it” and overload the assignment operator of all things while breaking its contract. Stuff like this is why many purists argue that operator overloading shouldn’t be a thing, because it can lead to shenanigans like this.
So while a much older date is probably appropriate, maybe 20-30 years ago, we can at least mark this (2022) until somebody justifies a particular previous date.
When writing code in the style of https://repo.autonoma.ca/repo/mandelbrot/blob/HEAD/main.c, this hides memory allocations altogether. As long as the open/close functions are paired up, it gives me confidence that there are no inadvertent memory leaks. Using small functions eases eyeballing the couplings.
For C++, developing a unit test framework based on Catch2 and ASAN that tracks new/delete invocations is rather powerful. You can even set it up to discount false positives from static allocations. When the unit tests exercise the classes, you get memory leak detection for free.
(I don't mind down votes, but at least reply with what you don't like about this approach, and perhaps suggest a newer approach that we can learn from; contribute to the conversation, please.)
I think more people need to see this. This is how the creator of C++ thinks we should be writing code. This is what he thinks code should look like. To split a string by whitespace we should use `while (cin >> s)`. We should have a `typedef` in the middle of functions. Iterations should use `.begin()` and `.end()` everywhere. There might even be a bug with a trailing "+" appearing in the output?
Imagine if this was a new language that the dev community was seeing for the first time. It's hard to imagine it gaining much traction.
The FAQ was mostly written before C++11 with some updates since then. I don't think he is rewriting every code snippet to match modern styles. It is an enormous FAQ and not meant as an introduction to the language.
Good arguments. But for some reason there are still strange people who prefer calling malloc/free (or new/delete or something similar) manually. Some of them even invent languages with manual-only memory management (like Zig or Odin).
One reason is that C++ still hasn't gotten 'trivial relocatability' right, i.e. being able to memcpy/memmove elements without having to call move constructors/destructors when growing your vector class.
Actually, issues with non-trivial moves and relocations are specific to C++. Some other languages (notably Rust and Swift) don't have such issues, but still have nice automatic memory management via destructors and containers built on top of it.
C++ compilers optimize out empty destructor calls and sometimes even replace calls to move constructors/move assignment operators with memcpy. But that's unfortunately not guaranteed in all cases due to constraints of the C++ object/memory model, which was initially designed without proper move semantics.
I'm no fan of Odin especially, but I'd expect that one obvious defence they'd offer is that this code potentially wastes a lot of resources and if you were writing in their language you'd more likely go "Wait, that seems like a bad idea..." and produce better code.
cat += *p+"+";
Feels very cheap because it was so few keystrokes, but what it's actually doing is:
1. Making a brand new std::string with the same text inside it as `p` but one longer so as to contain an extra plus symbol. Let's call this temporary string `tmp`
2. Paste that whole string on the end of the string named `cat`
3. Destroy `tmp` freeing the associated allocation if there is one
Now, C++ isn't a complete trash fire, the `cat` std::string is† an amortized constant time allocated growable array under the hood. Not to the same extent as Rust's String (which literally is Vec<u8> inside) but morally that's what is going on, so the appends to `cat` aren't a big performance disaster. But we are making a new string, which means potentially allocating, each time around the loop, and that's the exact sort of costly perf leak that a Zig or Odin programmer would notice here.
† All modern C++ std::string implementations use a crap short string optimisation. The most important thing this does, which is the big win for C++, is that they can store their empty value, a single zero byte, without allocating; they can all also store a few bytes of actual text before allocating. This might matter for your input strings if they are fairly short, like "Bjarne", "Stroustrup" and "Fool", but it can't do "Disestablishmentarianism".
I agree, having nice containers with automatic memory management admits such problems. But this code still works as intended; it just has suboptimal performance. And I think it's still better to use an approach that allows such performance issues than one with the bugs specific to manual memory management (memory leaks, use-after-free errors, out-of-bounds accesses).
And it's still possible to improve performance here without returning to manual memory management. Just replace it with something like this:
cat += *p;
cat += "+";
Now no temporary string is created and thrown away; only cat performs memory allocations under the hood.
No need to insult people just because you don’t understand other strategies for reducing the number of lifetimes to track and consolidating deallocations by using memory arenas.
You can use some arena implementation in C++ too, but only when you need such an approach. If you don't care, just use std::string, std::vector or something similar.
The C++ standard library interface is broken regarding its abstraction of allocation (according to its authors). Therefore you in fact can’t just use arenas in C++ without giving up on large parts of its standard library and becoming incompatible with other code. The languages whose users you call strange don’t have this issue.
I think it's a reasonable price to have your own containers (vector/unique_ptr replacements) if you wish to use a non-standard approach to memory allocation. Many people do this, like Qt with QVector.
But do you really need arenas? Does doing allocations in the traditional way create a bottleneck in your specific use case? Or do you just want to justify broad manual memory management (with its bugs and security vulnerabilities) in the hope of gaining (or not) a tiny amount of extra performance?
Respectfully, you do not seem to understand this topic well. If the C++ standard library design abstracted memory allocation correctly, you’d still be able to use containers and algorithms and not even notice whether their allocation strategy uses the heap, the stack, a bump allocator, a pool, etc.
I understand how C++ standard containers are designed. They are designed so that they don't store a pointer to an allocator instance inside. This saves memory in most cases (where the standard allocator is used). Overriding the allocator is possible, but only with another global allocator (via a template argument), with no possibility of specifying a per-object allocator.
As far as I know, newer C++ standards have something to work around this issue (the std::pmr polymorphic allocators added in C++17), or at least there were proposals for it.
Use-after-free bugs are still possible if a pointer into an arena outlives the arena instance that backs it. With C++-style owning containers this kind of error is possible too, but it's not as frequent.
But arenas can't be used in every case. They are suitable only when large numbers of allocations happen together and can all be deallocated at once. If reallocation or freeing of individual memory chunks is needed, arenas can't be used, so each allocation has to be managed individually, and for that containers are a better choice than manual memory management.