I know from experience that the verbosity of COBOL is helpful if you are a maintenance programmer trying to figure out what the code was meant to do, especially if the program abends at 3 AM and you have to get the batch program working before 4 am.
I would be sure to shout ABORT if anyone started to code in COBOL where I could see it. And though I could understand people being iffy about abortions in a pregnancy situation, I would expect someone to really try to justify their use of COBOL.
Fortran is still a popular language and receives updates. I just downloaded the AOCC Flang compiler, optimized for Zen 3 processors and the AMD Epyc platform specifically.
This ^^^ is the only language I use primarily (77, 90/95 and 2003) but I had to learn C++ for my job at NOAA. Funny because I know a dozen other languages, but nothing beats the simplicity and speed of C and Fortran.
one comment in COBOL's defense - this is not what it was designed for. The fact that it can crunch these numbers at a not-too-shabby rate is pretty cool, but it was designed as a "Business Oriented Language" (it's right there in the name). Having said that, as a (former) COBOLer, if/when you do find that your code is not performing, you find the "hot-spot(s)" and you rejig & optimise them. Usually the Pareto (80/20) principle applies, where 80% (ish) of the work is being done in 20% (ish) of the code - so you find that 20% and work it over and get a bump in performance - perhaps even coding it in another language (assembler being popular ;) - and then call that from within the COBOL code.
As someone that has coded in FORTRAN 77 and Fortran 95 (yes, they changed the case of the language name), I can say they are very different beasts. The former is close to the original versions, with strict column formatting to fit on punch cards. The latter is heavily influenced by C, and could be arguably considered a completely new language.
The most recent verion of Fortran is actually 2018. The version shown uses a bit of Fortran 2008 but does likely work with most Fortran 2003 compilers. Still FORTRAN77 code can still be compiled with no or very little chsnges on modern compilers. Like for C++ there is a policy of breaking no existing standard-conforming code.
@@johannweber5185 C++ has started to deprecate and remove some very old unpopular features that get in the way of better new stuff. Though it is a slow and well documented process to give folks a fighting chance at patching, and compilers can still be set to use the old standard(or an old compiler used), but a compiler set on c++20 may not work with 30 year old code. Of course after initial file compiling they can all [including fortran] be stuffed through the same linkers.
No wonder you felt at home in Fortran. Fortran has constantly been updated so that it now looks like any modern Algol-descendent language. The Fortran code you showed in the video has very little to do with the FORTRAN IV language I learned programming with in the mid 70s. No blocks, no character variables, no dynamic memory allocation, fixed colums (statements must be in columns 7 to 72). One Fortran-motivated habit I still have is that in most languages I use my integer variables still start with letters i to n.
The Fortran language is specifically designed to allow the compiler to do a better job of optimization. So a program that conforms to the standard but that is not specifically manually optimized is more likely to be better optimized than in other compiled languages. However, you can very often manually optimize your code in most any language and achieve close to maximum possible performance with very little difference between languages that are similarly manually optimized. Fortran has increased its support for object orientation dramatically over the years however and is increasingly challenging for compilers.
Fortran is a language of now, the best to do linear algebra in by far. Mostly because of the big daddy library of them all, glorious LAPACK. When I did linear algebra in C++ I still called back into Fortran. FYI, I am 30.
If you want something built for linear algebra try MIT Julia. If you want speed then fortran SIMD for Intel knights landing etc... Is still decent. I think the programmer, the compiler, and the CPU architecture are all necessary to get fast fortran code. Someone can write slow fortran code by accident but that doesn't prove fortran is slow exactly.
@@oraz. you do an extern function declaration in the Fortran module. G++ and gfortan has decent documentation on it. I think lots of prebuilt wrappers exist as well.
I've found that different language libraries can have wildly different efficiencies in their "get_current_time" functions, and it can be a heavyweight call in any case (relative to any purely CPU-based calculation). As such, by putting the time function in every pass of your loop as you do, you may be measuring the speed/slowness of the "time" function more than your "calculate" pass. I'd suggest instead doing an initial "get time", then a fixed (large) number of your calculation passes, then a final "get time" to determine the total elapsed time and dividing number_of_passes by elapsed_time.
Hallo, your point is very interesting. So I checked how many times gfortran was able to ask for the time within 5 seconds. On a AMD Ryzen 7 3700Xmachine it was 174 million times. So I do not think that the computational effort of determining the time has a significant impact on the overal performance.
Did you check to see how many of those time calls were cached by the CPU if they were less than 1ms apart? In this example the time function was called every 4ms. The result was never cached and the cache was possibly flushed by the OS or by other instructions. It would be better to run headless without all these other threads running as part of the kernel.
@@Ureallydontknow I do not know wheter this is an answer, but the call foes not exactly call for the wallclock time but for some counter that then is divided by the ticks per second. I guess that are only a feq assembler instructions and directly accesses the CPUbut was surprised myself about the speed.
@@PEGuyMadison Interesting, but unfortunately I do not know much about what actually happens at the gernel leven when calling SYSTEM_CLOCK (the subroutine actually used). Well, at least the measured time apears to be consistent with the elapsed wall-clock time.
if you've spent time working in it, you're right it does become quite comfortable. We did COBOL intensively for years at uni, and then I started my first full-time job as a C programmer. I've got say, that C code did look and feel to be too terse and unreadable initially. The world went from 'write code in a way that can be read and understood easily by someone else) over to a world of 'let's see how much of a hero we can be in terms of packing as much as possible into a single line of unreadable code'!
You referred to the "historical oddity" of Fortran using .lt. and .gt. for < and > respectively. I suspect it's because when Fortran was developed there were no < and > signs on most if not all keyboards. Way back then they probably only had letters, numbers, and a few punctuation symbols on their keyboards.
Fun video, but you have to choose your drag race if you want your language to shine. I would choose or at least value the languages for the following sweet spots: C/C++ for systems programming: operating systems, compilers, dynamic memory usage scenarios (heaps, pointers). Fortran for highly performant mathematical programming using arrays (matrices) Cobol for efficient business data processing. Think fixed format records, requiring business logic and transactions to manage them. TLDR; Speaking as someone who has professionally programmed in Fortran, Cobol and C#. Two things have happened over the last 30-40 years which has undermined Cobol and Fortran in the mainstream: object orientation and non fixed format data processing (XML etc). Back in the 1960s and 70s mainframe business data was inputted on cards (one per line) in fixed format, read by card readers. Files were organised as a sequence of records not bytes. Cobol (and Fortran) were designed to process data in that format. Cobol supports fixed format record structures (which would all be pulled in from source code include files in a library) that may have been generated and managed through data dictionary products and processes. Such that a single read IO on a file on disk would read in sometimes hundreds of fields of data for many records straight into the memory buffer of the file description format in the program. Which would then be immediately available for processing by the Cobol business logic. No tedious and very compute expensive XML parsing and deserialisation (yada yada) required like in OO languages. Cobol was far more efficient in compute, and needed to be because of the cpus of that time. Also, although Cobol is verbose (literally), I would contend that there are more function points per 1000 lines of code than your typical C#. This doesn't mean I like Cobol, I don't. It's not fun if you come from a Comp Sci background. I would never want to write a Cobol sort algorithm, or parse free format data or XML in Cobol. Just wasn't designed for it. Surely, you ask, businesses that still depend on Cobol (and it's the largest ones that do) need efficient sorting, so how does that work? External sorting routines of flat file data using sort keys etc were executed before the Cobol program started. Tools were SYNCSORT, DFSORT on the mainframe. These tools were highly optimised for IO and compute using virtual page management fixing and other tricks. Fortran on the other hand, by design, is highly performant for array processing. All the most optimised numerical libraries are (or at least were) written in Fortran. Converting them to C/C++ is likely to be less efficient, at least in the default configuration. One reason is the native support of complex numbers. The other big reason is that subroutine parameters are assumed to not overlap each other in memory. In C/C++ any two function parameters are assumed that they might overlap in memory (unless you tell the compiler otherwise). That's the assembler view of the world but without bringing the knowledge of the programmer to the machine code. Why is that unperformant? When the C/C++ compiler generates the machine code for manipulations of multiple passed in arrays, it sometimes has to re-fetch from memory the first array parameter because an update was done to the other array parameter and the compiler doesn't know the first one hasn't been altered. So it can't retain it in a register. That *kills* array performance when you have multiple arrays being processed in an inner loop. The fix in C/C++, I think, is to tell the compiler that the parameters are not aliases of the same address (if it supports it, and the programmer knows about this stuff). C/C++ is more like assembler, you can do anything but you might blow off your leg in the process. You have to be much more sophisticated to get the same performance and consequently stability when trying to play in the other languages areas of strength. Old style Fortran (Fortran 66), was a bit dangerous because it was white space insensitive, so if you missed out critical tokens the statement would mean something quite different but still compile. For example, the beginning statement of a FOR loop (called a DO loop): DO 1000 I=1,200 to repeat a section of code ending at line 1000 iterating variable I to 200 could be accidentally written as DO 1000 I=1.200 which with the blanks stripped out was an assignment statement of the value 1.2 to an implicitly typed REAL variable DO1000I rather than a loop declaration! Apocryphally, this "." instead of "," caused a rocket to crash. Fortran 66 didn't have string variables, you had to store string constants in INT arrays, if you had to use them, and you outputted them as Hollerith literals in FORMAT statements. String concatenation? Forget it. The solutions were not pretty and were not very reliable. So drag racing such different languages is kind of pointless. Eratosthenes sieve is not going to be the best fit. Fun though it is.
Some friendly nitpicking on Fortran :) At 10:15 the video says "there's no equivalent of a "main" function - it appears that program flow starts at the top of the file and flows on down" In fact there is an equivalent to a "main" method, and that is the "program primes_fortran" statement at line 1 of code. That is the entry point to the code. 10:40 "the contains keyword indicates where the subroutine and functions begin". Not wrong, but not the whole story: you could have put all functions and subroutines outside of the "program primes_fortran" (eg they could be in other files), in which case you don't need to use "contains". Using "contains" makes the corresponding functions internal, and allows more checks to be done at compile-time (eg that the type of passed arguments is correct) and gives some extra functionalities (eg you can recover an array's size and shape with size()/shape() build-in methods; otherwise you explicitely have to pass the dimensions as further parameters). 15:21 "Fortran doesn't use a conventional less than and greater than symbols". Starting from Fortran 90 it is totally possible to use =, ==, /= for comparisons. BTW, /= is used for "different from" (NOT 'divide by'!) because "!" is used to indicate comments; a small quirk is that for comparing boolean values you shouldn't use == and /= but .eqv. and .neqv. (equivalent/not equivalent). ASAIK boolean operators were kept separate from standard ones because they are assigned higher priority.
going completely on a tangent here but some more fun (?) facts (not about fortran): note that /= is used in common lisp too (and common lisp (and of course several other lisps, and perl) has (have) similar quirks re: different operators for different types). other languages do use other non-standard symbols for not equals too - like ~= (lua, matlab) and (sql apparently... i'm pretty sure other languages have this one too?) as for different priorities (precedences), this can be seen e.g. in ruby too, like `&&` and `||` vs `and` and `or` (`and` and `or` have lower precedence - basically, pretend they're keywords)
For your second point, that was probably the case in a legacy version of Fortran you've worked with, but it isn't the case on modern fortran. You can create an explicit interface for subroutines and functions using a module. In a separate module file, you can decalre subroutines and functions and then call them in your main program by using the USE keyword followed by the module name at the top of the driver program. This use association provides an interface that allows dimensions of arrays to be passed to external procedures stored in the module. You can do all the same wonderful things like passing in implicit shape arrays and getting compiler error checks without the ugliness of having all the procedures tacked on at the bottom of the main program.
No offense to Mr. Van Bakel, but this was his first COBOL program (as he stated in the notes for the code). I've been slinging code for 40 years with about 30 working in COBOL. Most of that has been in processing huge volumes of transactions, and one of my focuses has been efficiency. I'd have written that very differently. There are optimization tricks I'd have used, such as avoiding COMP-3 and coding the core loop in line (since PERFORM does a jump).
I'd like to notice that the Github repository also contains an object-oriented Fortran proram that shows a more modern Fortran code (the performance is similar or slightly faster than the shown version).
@@luckyluckydog123 Intel's OneAPI licensing is a bit confusing. It says that it is free for all purposes, and a license is not required to download, install, or use it, but commercial licenses and even academic ones can be still requested and issued. I did not have the time to figure out what is really going on there.
@@elinars5638 I think it's just a way for big institutions to support the development of the compiler, while still allowing access to those who can't buy the license.
My first year at university (1987) doing Chemical Engineering for 1 term (I think) we had computing lectures on Pascal, because the Computing Science Dept. said this was the "best" language. Then in the last 2 weeks of the year we had a crash course in FORTRAN because that was what industry actually used. I think that variables I, J, K, L, M, N. are implicitly defined as integers, because these variables are used for loop counters.
More importantly those letters are commonly used in mathematical series notation to denote iterators and matrix elements. As such they would have been familiar to the mathematicians and scientists that FORTRAN was designed for back in the 50's. I cut my teeth on FORTRAN as a physical chemist in the 90's and 00's on big Silicon Graphics machines.
I was once told at uni that with punched cards the information is not actually stored on the cards at all. The cards are just used to make sure the holes stay in the right place. !!
I mean it's a bit more accurate to say the card is only there so you know where the holes aren't (though it would also be a pain to pick up without the card)
Thank you and in my freshman days at UMass in 1967 I programmed in FORTRAN, yes using punchcards and turned them over to the night shift Data General card shredder. We got the mangled cards and a green bar printout saying, run failed. Lol. It took weeks to get a simple program completed. Used slide rule too so yes I am a dinosaur. Enjoy the videos and look forward to the next. Cheers
Been there. But in 1986 for some lab work on a mechanism course. I still own a lot of blank punchcards. I use some of them for my notes. I also had in collection some slide rules.
I started programming when I first got a basic spectrum aged about 10. After school, I went to university to study a B.Eng in Electronics and Comp sci. (was about 1991) I was lucky enough to have a holiday job where I still work, and learned Assembler (Z8, 8052 etc) and straight C. They taught me C++ at uni and I remember the physics guy showing me Fortran. Anyone remember Zortech C? Graphics on an ms-dos screen.
On the FORTRAN side, there is one feature that is still unbeaten that is how it stores matrices. Fortran stores matrices as COLUMNS, not as rows. That may seem unintuitive, but think about it: you populate matrices by rows, say you read several data points at the same time from sensors, but then, all calculation is done over columns. Also, you are allowed to reference and pass subsets or dimensions as parameters. So if you have a 3x3x3 matrix, you may call determinant(M[1]) where M[1] is a 3x3 matrix itself. Very handy !
When I was younger, I used to hear a lot of jokes and warnings regarding Fortran. How old it is, how ugly the code is, how outdated. However, in many academic fields, especially those involving linear algebra, it's still a very common language. It received multiple updates over the years, and it is certainly possible to write nice code in it. I think it's mostly just bad rep from ancient (Fortran 77) code written by amateurs (scientists and the like). True, there is a lot of very ugly, old code written in Fortran. Some people still didn't receive the memo that you can use more than 6 characters for your variable names. But old C code is sometimes just as ugly. Doesn't have much to do with the language, rather than just the different priorities (readable, easily extendable code vs one-time-use, specific, performant code).
I've had to read what I believe was Fortran 77 during my PhD, and reimplement a part of the code into something more modern. I've no formal CS education, so that was a trip. The one presented here was miles more readable. Not having to interpret characters placed at specific columns helped a lot.
Later freeform Fortran versions (90, 95, 2003, 2008 and even 2018 - yes, Fortran is still being developed) look far nicer and don't throw that many wrenches against the programmer trying to write a performant, but readable code.
"Some say he actually returned from a function before evn calling it, others say he was the actually developer steve balmer was yelling about". You destroyed me with that line XD
Sad that IBM's PL/1 language was once again ignored - it was the 3rd major language of the System 360/370 era. I coded a lot of it from mid-70's to mid-80's. It is essentially what you'd get if COBOL and Fortran had a child... a language good for both scientific programming and business programming. In fact IBM's DB2 database is largely written in it as I understand it.
PL/1 was a great language. I shipped a compiler for PL/1 on System 38 in about 1983. The language had 2 big faults. A. one could convert almost any data type into another. B. The rules for precision of binary ("fixed" point) arithmetic were very strange and confused lots of people. ---One of the best PL/1 features was descriptors that allowed arrays and strings to be passed to subroutines without having to explicitly pass bounds and sizes. There were other PL style languages which extended the use of descriptors and which resulted in totally safe code with very little extra overhead (maybe 3 to 5%) with the right optimizer.
@@williamdavidwallace3904 I never considered the ability to essentially type cast any data type into another a fault - as long as you understood what you were going to get it was quite useful at times.
@@rhymereason3449 That "feature" produced too many programming bugs. I always occasionally checked all the low level PL/I compiler messages to ensure that I had not done an implicit cast that I did not want. I prefer specific syntax that makes the cast obvious like in C.
I can't wait. Personally I'm rooting hard for Fortran, although I think C++ will almost always win in this type of tests, mostly because 1. in C++ you can do some low-level things which aren't easily achievable in Fortran 2. I assume the C++ implementation have received much more attention, especially by C++ optimization gurus... this is a consequence of it being a much larger community. 3. C++ compilers have probably received orders of magnitude more in effort than Fortran ones, and hence might be better at optimizing code.
The Fortran compiler is good. The huge work to make efficient C++ compilers comes from C++ being a larger and more complex language. So muuuuch more alternatives to take care of.
I haven't watched yet, but based on my experience with FORTRAN and C++, I'm thinking FORTRAN might win, because all variables and functions are created in memory at compile time. I don't know if there is even a stack, but pretty sure there's no new, so no overhead for creating variables on the stack. Of course this is assuming that only doing numerical processing. Text handling is horrible in FORTRAN. Going to watch now. Nail biting.
@@axelBr1 Static/dynamic allocation doesn't matter much when it comes to computing speed. The big cost with dynamic memory is the alloc/release operations. So the loops should not allocate and release. When it comes to function parameters and return values, C++ will try to use registers whenever it can.
If keeping the implementation "faithful", you can't use many of the C++ features. I wrote an implementation that is about twice as fast as Dave's code, but it wouldn't be faithful. It uses memcpy to speed up first prime multiples (3,5,7,11), and a simple loop optimization for weeding out multiples of 3. By faithful, it is assumed you don't know any primes (though technically 2 is known in the example), though technically it could be argued that by the same logic as 2, you also know 3. Also, the use of SIMD instructions are not allowed, which negates some benefit of the language (and other languages with support for SIMD).
Fortran is still quite widely used in science. More so than C, let alone C++. Normally, if Python (or R) is too slow, you see people resorting to writing the critical piece in Fortran. And, oh boy, do we still have old Fortran programs still around that require very, very specific input. :-D
i remember trying to alter an old 77 file to use a namelist file for input to help some classmates with a tedious manual input for an astronomy course. i still have no idea (and google as well) if it was even possible.
I don't think this is universal. The statisticians I know use R for basic code and C++ (in R studio) for performance critical code. The cryptographers I know use C or C++ for performance critical code.
In the COBOL course which I assisted (1994) during university, the use of the COMPUTE command was strongly discouraged. Our professor would tell us that the use of COMPUTE command to solve simple math was like "dropping an atomic bomb to kill an ant". Therefore, I never used it.
This isn't really testing languages, but specific compilers. And you could disassemble whatever is the winner and use that as the entry for assembly language. You can always match any other language by assembly in that way (or in c++ using inline assembly).
When I was working on my maths degree in the 00s, I had to learn F77 because all the research going on in numerical analysis used it. I asked my prof once why we didn't use F95 and was told that it didn't do anything new they needed.
No Math was first I think. (Would love to hear trivia about it) Coders and mathematicians are both lazy when writing hundreds of terms per pages so synthesizing them in the most dense symbols, yet naturally and easily readable by a foreign eye was a common goal. Just like in science where a letter used in a revolutionary paper set a standard or simply being the first letter of a word that well describes it F : forces, d : derivate, v : vector etc... Alphabets from different languages being the simplest readable caracters, when we'll exaust ALL THE LETTERS in the world we'll easily come up with new symbols to use. Emoji might be ones like that since Unicode is the new world alphabet
Given that this problem is so heavily single threaded CPU bound performance is largely determined by the back end, which for Rust is LLVM. So you should expect basically the same performance as Clang compiled C or C++. Clang is, to my understanding, a little behind MSVC and GCC, and a bit more behind the Intel complier, in terms of codegen quality.
@@SimonBuchanNz well rust compiles to HIR and MIR before finally to "llvm-friendly-language", which allows more optimization opportunities for the Rust frontend before it reaches llvm.
Wouldn't a truly optimal compiler unroll such a deterministic algorithm and precompute the values into an array that the final executable just spits out repeatedly? I mean unless you hold back some critical value to prevent pre-calculating and enter it at runtime.
Wonderful to see the FORTRAN video has arrived! Great work! I'm not going to lie I'm a bit disappointed that my optimised object-oriented version didn't make the video but I can absolutely see why from a storytelling point of view :-)
Thanks again! It may even be the better in many ways, and does show up Fortran has been updated, but I wanted to show the original "flavors" from the "olden" days!
@@freezombie I already have thrown attention to the object-oriented version (according to the 53 likes of the comment quite a lot of people are interested in that version). But I guess most viewers have been surprised by how "modern" even procedural Fortran (no punch-cards involved) and that fortran is actually - within a its typical range of applications - a rather easy language. For your object-orientented (and generally very clean) version the surprise would have been even stronger.
I dont know, but there is a certain cleaning'ness in COBOL that i like. code that is declarative like that certainly looks easier to maintain and check the validity of. I mean, if it dint work it wouldn't still been around.
I do believe the only reason it's still around is because too much critical code has been written in cobol in the 60s, and we can't get rid of it now. At least it's not financially feasible. But I'm 22 years old and have never programmed in cobol so I'm not actually sure, just going off of what I've heard.
@@mananasi_ananas well, yeah. But there was also lots of code written in asm, many flavors of basic and others Yet cobol is the one monolith that seems to stand the test of time
I wasn't surprised at the closeness of the race because many of the primitives in COBOL are just longer mnemonics for underlying assembly constructs. As far as your super high speed race-offs it ofc matters what is the underlying CPU architecture. Since assembly was once just one-to-one correspondence to machine code, in older CPUs with simple pipeline stages for decoding the instruction bits via VLSI gate logic it would be tough to beat *well written & optimized* assembler on that kinds of CPU. Actually that is another varying factor that may or may not yield apples to apples in your tests. How optimized is the assembler code (in loops, initializing, passing variables)? Does it use registers or memory? What CPU? Single CPU or is the super fast mystery language spawning threads on multiple processors? stuff like that. Nah, I remain unconvinced for now.
Hi Dave, Great Video. After working for Micro Focus for 20 years, COBOL is a language I got to know rather well. COBOL has its uses and certainly the latest implementations provide almost every feature you could want. Is that a good thing? I leave that down to the individual to decide. Certainly the source code provided was somewhat verbose and modern COBOL could be written in a more streamlined form. As a matter of interest, which compiler did you use for the test? When I was a mainframe programmer, I preferred PL/1 - are you going to look at this too? Micro Focus has/had a PL/1 compiler when I worked for them.
Yes I agree, the COBOL code did look very dated, but maybe that is what they wanted to show. Morden COBOL is much more powerful and way less verbose focus. I still wonder if he use the Math functions in COBOL over the verbose old method (ADD -1 TO...)would the result been better?... oh yes on a side note ADD -1 is correct as processors can only add numbers ... mind blown
Hi Paddy, I always thought that the Micro Focus compiler converted the Cobol code to C and then compiled that. I think we would have to know a lot more about what Cobol (version and compiler) is being compared and what platform the comparison is being run on.
@@johnconcannon6677 For the comparison a recent version of gnucobol had been used. This appears to be an actual cobol-to-c transpiler. By contrast, gfortran is part of the GCC suite, but does never generate intermediate C code.
@@johnconcannon6677 Hi John, Not to my knowledge, well not on Windows anyway. The MF compiler can generate INT, GNT, MSIL, Java Byte Code and native object code. HTH. Paddy
@@paddycoleman1472 Thanks for the info Paddy. The last time I used Micro Focus Cobol was in the 1990's so I must assume that things have changed over time.
Remember good times working in COBOL in college. My carefully crafted code for my course assessment project chucked out over 10,000 error messages at compile time. Just because I missed out one full stop.
Thanks for the video and the opportunity to compare my programming efforts to others in your open source repo. At the end of this video, you mention your upcoming episode on the fastest computer languages and that the fastest are "not C/C++". That isn't quite fair to those languages, as the only reason for that is that no aficionados of those languages have adapted the techniques used in the fastest contributes to them. As I show in my contributions to your "Drag Race", writing fast versions of this benchmark is just a matter of the right techniques applied to any language that can emit efficient "native" machine code for the host CPU. The languages currently at the top of your "Drag Race" all do this to varying degrees of success, but you'll notice some common things among them: they all either produce their output CPU machine code through LLVM (the languages V, Crystal, Haskell - a back-end option, Julia, and Rust - contribuitions written by my cohort, Mike Barbar), through a C/C++ back end as more of a "transpiler" from one form of code to another as does the language Nim, or implement their own optimizing compiler from the ground up as in the language Chapel sponsored by Cray/HPE. This last way of writing one's own custom compiler is the hardest and generally needs a corporate sponsor in order to provide enough resources to do as in Chapel, and can even then only be moderately successful as in the language Go from Google. Languages that use LLVM as the back end are most successful when they manage to tune their output LLVM intermediate code and optimization options to best suit the LLVM native code generation process, with the most successful being Rust sponsored by Mozilla followed by Haskell,, Julia, etc. One of the fastest entries here is my Nim contribution using either the C back-end as submitted but also about as fast using the optional C++ back end. So, indirectly, the C/C++ languages are represented at the top of the chart, and there exists some Nim generated C/CPP files as in intermediate code step in compiling my Nim contribution, although they are hard to read by humans as they are computer generated with "mangled" names to automatically prevent name collisions and "boilerplate" code as automatically generated by the Nim compiler. To be fast for this benchmark, all that is required is extreme optimization of a very few very tight loops that do the marking of composite number representations which is only a tiny fraction of the overall code setting up those loops. My point is this: the C/C++ languages are in fact still as fast as the best and if the Nim language can transpile to fast code using them, then my techniques could also be used directly in C/C++ code. If I were to do such a thing, I would likely use C/C++ templates/"macros" to replace my Nim true hygenic Abstract Syntax Tree (AST) macros otherwise using the same techniques, and the resulting code would likely be at least as fast as that emitted by Nim; I am surprised that no C/C++ aficionados have yet done that in your "Race". The reason that I haven't done it myself is that I regard C/C++ code as ugly and difficult compared to writing code in Nim or other languages, and no longer work directly with C/C++. So the point needs to be made: There is more than just being able to write fast code in whatever language as there are all kinds of ways to be able to get speed as I demonstrate in your "Race" in the "top Fuel" class using an assortment of languages; code needs to be effective and elegant as well, written using modern paradigms that produce safe(r) and relatively easy-to-read-and-maintain code (if one is familiar with the languages and paradigms - such as functional programming). Note: I am not a proponent of "everything is an object" as used in mostly Object Oriented Programming (OOP) languages although I use that paradigm when it makes sense or when it is forced on me by the language. Just as one retired Saskatchewan native to another - formerly East of Davidson...
Go Riders! But I can't agree entirely. Unless and until someone demonstrates it by implementing a faster algorithm in C++, I could maintain the language is "too hard" to do it in, and that'd be a failure and limitation of the language! And see almost my lengthy argument about asm being the fastest.
My grandfather really did work in Cobol. He'll ask me what language I'm working in right now and invariably he'll say "Oh, I guess I don't know that one. We always used Cobol".
I taught Fortran at uni in late 70s/early 80s, and it didn't look anything like the structured Fortran here. GOTO might have been considered harmful, but we still hadn't adopted versions of Fortran that thought so! And writing Cobol was like speaking Shakespearian English!!!
Dave, I enjoyed your video but would like to point a couple of things out. Cobol on the mainframe is first compiled into Assembler and the compiler creates a machine code executable so it's speed will depend on the ability of the compiler to create efficient assembler code (and therefore machine code). Your Cobol example looks pretty horrible so I am not surprised that it doesn't perform so well. In the old days, C could out perform Cobol on the big IBM mainframes but, since the introduction of the new z/OS assembler commands, Cobol has got faster than C as the new compiler/binder takes full advantage of the (over 200) new assembler commands. I would like to ask you what platform(s) you are doing these tests on and what versions (and compilers) of the different languages you are using. Once again, thanks for your interesting vid and best wishes, John
@@DavesGarage No problem but I was also trying to point out that Cobol code doesn't run on any computer, the compiled code gets run which on the mainframe would be machine code. Same goes for PL/1, C or Fortran. If you are using GNU Cobol or Micro Focus Cobol on Windows or a Unix box then it gets translated to C which is then compiled. I cannot discuss your findings without knowing what I am supposed to be comparing. It's not a question of the Cobol code itself but the actual machine code that the compiler generates and gets run.
Fortran is still used today for very specific applications, primarily when doing massively complex simulations (such as simulating the motion of an entire galaxy for example). I find that really fascinating.
i had to learn fortran90 for a computational physics course, and i actually enjoyed it. covid hit after i finished that semester. it's also possible to split up the fortran90 code into separate pieces, and then make use of a makefile to link up their dependencies. at least, that was what i was taught in order to keep the code modular. it's also worth noting that it can be threaded to take advantage of multicore processors. dunno about cobol.
Hi Dave !! , I can't find the next videos after the E04, you said there are 3 faster programming languages , where can I find your next video after E04 ? thanks a lot !
My favourite feature of COBOL from my 80's programming days was the ability to write variable length records by using a level 88 redefine with a PIC X OCCURS DEPENDING UPON VARIABLE and writing that redefine having defined the maximum records lentgth with the 01 definition. This was Honeywell BULL DPS mainframe era back in the 80's. The days of no GDB or indeed any real debugging tools and your friend being able to do dump analysis by hand - fun times.
Enjoyed the video. I'm curious : what Cobol and Fortran compilers did you use? The only Fortran compiler I've found that is readily available (and at no cost), is GNU? Maybe the same for Cobol? Thanks
In this case gfortran has been used (the now rather old Version 7 for Ubuntu 18.04). A further free fortran compilers are form the LLV project ( flang). Intel and Nvidia provide free, but not open-source compilers (for the exact license conditions please look on the pages of the respective vendors). So no matter what operating system you use, there is a high-quality fortran compiler available without having to pay money. Concerning COBOL I just know GNUCobol as a free compiler (Note that in contrast to gfotran it is not part of the GCC Compiler suite, but still needs a C compiler for creating the executables. It also appears to be available on all relevant platforms (I have not tested it personally as I am not a Cobol programmer).
Hi Dave im looking some what within the early MS / DOS and Qdos but am looking at a' Fat 12 '...going by my own recolection would this be the ' File Allocation Table' ? Is this allowed open memory port storing .Store open available memory sert data in circumfrefce srent tables ,allocate ,available open readddy usable jeest RAM?
What's more interesting he said it was faster than assembly... Now that is kind of interesting. Personally I'd think that given unlimited development time and resources assembly would always outperform everything but possibly hand wrought machine code. The amount of work needed would however be kind of ridiculous which is why high level languages were developed in the first place. Also the whole idea behind languages such as Cobol are that they are to be decoupled from the underlaying tech as far as possible. The programmers should not have any need to know anything about how the computers actually work at a low level to write the programs they want. This might not be conductive to creating the most efficient code for a particular computer architecture, but that was never the idea.
@@blahorgaslisk7763 Regarding assembly - back in the day it was king of the hill, but it breaks down with modern CPUs. In order to optimize assembly language properly, you need to know the exact model of CPU that it will be running on, and performance can suffer greatly if you choose the wrong CPU target. The compilers are able to take advantage of special instructions on the processors that would be difficult to match in assembly language. Modern super-scaler CPUs can issue more than one instruction per cycle, under the proper conditions. That being said, knowledge of assembly language is crucial for proper tuning of higher-level languages, since you need to look at the assembly language that the compiler generates, and determine if it can be improved by changing the source code, with tricks such as manual loop unrolling, cache utilization improvements, or algorithmic improvements.
I am slightly confused. How the leader is not an assembly language? You can disassemble the code that leader provides and here it is. Or do I miss something?
I'd be curious to see how this would play out on a mobren Mainframe like a Z15 and the latest COBOL compiler from IBM (V 6.3) with a ARCH setting of 13 vs fortran and C++. Not that would be a software race on the the best hardware.
I don't know how the modern system Zs differ from the older 370EA and XA generations of system, but C used to run really badly on older systems because of some features you would normally expect to be in an ISA, but which were missing in the 370 ISA (I was using Amdahl mainframes at the time). System Z may well have filled in these omissions, because Linux is quite a big thing on current generation mainframes (such as the Linux One mainframe systems).
Considering how the bitfield is static and predetermined, I can't help but feel like it can be calculated at compile time if you just use the right attributes and modifiers in modern c++.
I started as an assembly language coder in the 70s with the pre-requisit of being able to read object code in hexi-decimal format working for Rockwell Intl. I then graduated to ForTran coding at IBM working as senior programmer and supervisor of various manufacturing engineering projects. I currently am self employed coding in PGI ForTan 99 on a Windows XP system developing and supporting manufacturing engineering applications used for post-processing CAD/CAM data to machine code data for various CNC macine tools in mostly aerospace industy. I also have personnaly met many key people in the computer and manufacturing business including John Backus the inventor of ForTran.
I seem to remember that Fortran was brilliant but not as a language, but because it made available the NAG library calls which actually did the real work.
One of my college profs was a former Navy programmer. The Navy had determined that their Cobol code was too slow and wanted it converted to assembly. My prof wasn't that good with assembly, so he decided to rewrite it in Fortran, do a compile with assembly and turn in that ode. He figured the Fortran compiler could optimize the code at least as well as he could. Plus, he could write a couple lines of Fortran, then get enough "debugged" assembly code to meet his daily quota, then go fishing and have a couple of beers. BTW, Cobol was allegedly self-documenting. You could buy a version of Cobol from MICROSOFT at one point. I think it was about $700 when you could just grab a copy of Python for nothing.
Back in 77, FORTRAN was the first computer language I learned, as a freshman in Electric Engineering. But it was not the the modern FORTRAN used in this episode, it was FORTRAN 66 with its three way IFs and line numbers. In punched cards, of course. In 78 I started to work as an intern and the first thing I had to learn was COBOL. And I must say that COBOL has not changed much since then.
Back in 78, structure in COBOL was not yet widespread since it was only added to COBOL in COBOL-74. Since then there has been object-oriented COBOL. Regardless, I also hated my summer job writing COBOL. And typical for the era, I moved to FORTRAN to pay my tuition fees, but was secretly in love with ALGOL derived languages (ALGOLW, ALGOL68, C, PASCAL etc.).
Bit of trivia for a nerd quiz: The FORTRAN version before 66 was IV (read as four but never written as a digit). Waterloo Fortran II was abbreviated to WATFOR which was a nice acronym, and was followed by WATFIV which was FORTRAN IV and not five as many ppl imagined. Perdue University Fast Fortran was PUFFT which always made me think of magic dragons...
Heck, I wanna know how Rust and Go places amongst the 40 y/o (matured/evolved) langauges.... Please, don't wait too long to publish ep05 Dave. And thank you so much for the series. REAL FUN TO WATCH
@@orkhepaj Your guess is as good as mine. I wonder if it's some functional language like Haskell or maybe ADA / some LISP variant. I would be surprised because those languages favor safety over speed, but optimizers can do funny things.
I'm going to take a Wild Ass Guess and predict it's FORTH. Although it's interpreted, it seems to be amazingly fast for certain types of operations: not a lot of runtime overhead?
Would love to see you come back to this series. I'm particularly curious about languages that use JVM like Scala and Clojure, how they compare to Java, and perhaps how they compare to themselves when programmed using OOP vs FP, as well as on JVM vs compiled to native executables, which I know you can do at least with Scala.
Reading COBOL code is just like reading an angry person's text message
Awesome and true :-)
I know from experience that the verbosity of COBOL is helpful if you are a maintenance programmer trying to figure out what the code was meant to do, especially if the program abends at 3 AM and you have to get the batch program working before 4 am.
For real, cobol syntax will def make you want to walk into traffic 🤣
True, but if code is properly commented, you don't have this problem in any language
In my 41 year experience, "properly commented" code was extremely rare and most of it was spaghettified.
Another old joke:
Q: How are COBOL and pregnancy similar?
A: Miss one period and you're in trouble.
Hahah that funny man love it 🤣
Nerd humour is the best. A bit like the old 10 types of people etc...
In C++ your colon takes a pounding.
I would be sure to shout ABORT if anyone started to code in COBOL where I could see it. And though I could understand people being iffy about abortions in a pregnancy situation, I would expect someone to really try to justify their use of COBOL.
@@markpitts5194 now try to say there are 2 different types of people but expressed in base pi.
I had to take a COBOL course in college... we had to turn our programs in on a floppy.
In 2009.
oof
Hopefully you didn't have to write it on a TRS-80 model 2.
I took COBOL in the spring of 1990. The instructor was an amateur paleontologist.
Did the professor 3D-print the "Save" icon?
Not punch cards?
Fortran is still a popular language and receives updates. I just downloaded the AOCC Flang compiler, optimized for Zen 3 processors and the AMD Epyc platform specifically.
There were 5 things in that last sentence I've never heard of.
Fortran gang!!! It is the best if you have a lot of linear algebra to do.
@@KennethSorling Okay so AOCC, Flang, Zen 3, and Epyc... what's the 5th?
G-d bless FORTRAN
@@JeoshuaCollins your mom
I've known FORTRAN since 2011 (Atmospheric Sciences), it is still a main language used in computer weather modeling.
This ^^^ is the only language I use primarily (77, 90/95 and 2003) but I had to learn C++ for my job at NOAA. Funny because I know a dozen other languages, but nothing beats the simplicity and speed of C and Fortran.
@@douglasmarch6601what about rust and zig?😅
@@heruhdayget out
Fortran is the bedrock of computational physics. It was my first programming language. Ever. In 2016!
Mostly because no one wants to port over huge legacy codes to modern languages
one comment in COBOL's defense - this is not what it was designed for. The fact that it can crunch these numbers at a not-too-shabby rate is pretty cool, but it was designed as a "Business Oriented Language" (it's right there in the name).
Having said that, as a (former) COBOLer, if/when you do find that your code is not performing, you find the "hot-spot(s)" and you rejig & optimise them. Usually the Pareto (80/20) principle applies, where 80% (ish) of the work is being done in 20% (ish) of the code - so you find that 20% and work it over and get a bump in performance - perhaps even coding it in another language (assembler being popular ;) - and then call that from within the COBOL code.
As someone that has coded in FORTRAN 77 and Fortran 95 (yes, they changed the case of the language name), I can say they are very different beasts. The former is close to the original versions, with strict column formatting to fit on punch cards. The latter is heavily influenced by C, and could be arguably considered a completely new language.
The most recent verion of Fortran is actually 2018. The version shown uses a bit of Fortran 2008 but does likely work with most Fortran
2003 compilers.
Still FORTRAN77 code can still be compiled with no or very little chsnges on modern compilers.
Like for C++ there is a policy of breaking no existing standard-conforming code.
@@johannweber5185 In my experience, the most used standard is fortran90 for the super high performance things
@@johannweber5185 C++ has started to deprecate and remove some very old unpopular features that get in the way of better new stuff. Though it is a slow and well documented process to give folks a fighting chance at patching, and compilers can still be set to use the old standard(or an old compiler used), but a compiler set on c++20 may not work with 30 year old code. Of course after initial file compiling they can all [including fortran] be stuffed through the same linkers.
Worst Fortran ever is WATFIV from University of Waterloo. Canadian stuff is to be absolutely avoided.
Fortran77! I studied Fortran IV in 1967 as my first programming language.
So "implicit none" kills God. Got it.
No, it just makes God undefined...
@@ivoivanov7407lol Good point.
You can still declare
REAL GOD
@@johnburr9463 I guess we should say that fortran is a christian language. It *_implies_* that god is real.
You can also declare God as not real…
Cobol is actually quite an important skill because lots of businesses are looking for people to maintain their legacy code
No wonder you felt at home in Fortran. Fortran has constantly been updated so that it now looks like any modern Algol-descendent language. The Fortran code you showed in the video has very little to do with the FORTRAN IV language I learned programming with in the mid 70s. No blocks, no character variables, no dynamic memory allocation, fixed colums (statements must be in columns 7 to 72).
One Fortran-motivated habit I still have is that in most languages I use my integer variables still start with letters i to n.
The Fortran language is specifically designed to allow the compiler to do a better job of optimization. So a program that conforms to the standard but that is not specifically manually optimized is more likely to be better optimized than in other compiled languages. However, you can very often manually optimize your code in most any language and achieve close to maximum possible performance with very little difference between languages that are similarly manually optimized. Fortran has increased its support for object orientation dramatically over the years however and is increasingly challenging for compilers.
That was great. I've not heard the God is real joke for at least 20 years!
Really looking forwards to the top 5 episode!
1:20 "I, J, K, L, J, N". -Can you explain the 'double J' part of the joke?-
Watched the whole thing; he reads out an "M" so must be a typo.
Devil is also real, but not my_arse...
Why do programmers keep smoking despite all the warnings on the packages??
They are only warnings. No errors. Programmers ignore those.
Fortran is a language of now, the best to do linear algebra in by far. Mostly because of the big daddy library of them all, glorious LAPACK. When I did linear algebra in C++ I still called back into Fortran. FYI, I am 30.
Alpacka isbstill the best. The clapack is a a nice wrapper for gcc
Lies
If you want something built for linear algebra try MIT Julia. If you want speed then fortran SIMD for Intel knights landing etc... Is still decent. I think the programmer, the compiler, and the CPU architecture are all necessary to get fast fortran code. Someone can write slow fortran code by accident but that doesn't prove fortran is slow exactly.
How did you call back into Fortran?
@@oraz. you do an extern function declaration in the Fortran module. G++ and gfortan has decent documentation on it. I think lots of prebuilt wrappers exist as well.
Why should C++ be discussed in high school health class?
- Because every other line has an STD
I've found that different language libraries can have wildly different efficiencies in their "get_current_time" functions, and it can be a heavyweight call in any case (relative to any purely CPU-based calculation). As such, by putting the time function in every pass of your loop as you do, you may be measuring the speed/slowness of the "time" function more than your "calculate" pass.
I'd suggest instead doing an initial "get time", then a fixed (large) number of your calculation passes, then a final "get time" to determine the total elapsed time and dividing number_of_passes by elapsed_time.
Hallo, your point is very interesting. So I checked how many times gfortran was able to ask for the time within 5 seconds.
On a AMD Ryzen 7 3700Xmachine it was 174 million times. So I do not think that the computational effort of determining the time has a significant impact on the overal performance.
Did you check to see how many of those time calls were cached by the CPU if they were less than 1ms apart? In this example the time function was called every 4ms. The result was never cached and the cache was possibly flushed by the OS or by other instructions. It would be better to run headless without all these other threads running as part of the kernel.
@@Ureallydontknow I do not know wheter this is an answer, but the call foes not exactly call for the wallclock time but for some counter that then is divided by the ticks per second. I guess that are only a feq assembler instructions and directly accesses the CPUbut was surprised myself about the speed.
I would have to check but "get time" can cause a kernel trap which can reschedule your thread early.
@@PEGuyMadison Interesting, but unfortunately I do not know much about what actually happens at the gernel leven when calling SYSTEM_CLOCK (the subroutine actually used). Well, at least the measured time apears to be consistent with the elapsed wall-clock time.
Did the next episode of this ever come out? I don't see it on your channel
Finale is where?
Some say The Stack materialized out of the result of the first punch card ever executed
Love this series!
Am I the only who thinks COBOL's syntax looks neat and interesting?
I feel it's more "repulsive and obscene" personally
@@theRPGmaster I probably wouldn't enjoy working with it lol but it's an interesting take
written in lower case like it shld be its much easier to to work with
if you've spent time working in it, you're right it does become quite comfortable. We did COBOL intensively for years at uni, and then I started my first full-time job as a C programmer. I've got say, that C code did look and feel to be too terse and unreadable initially. The world went from 'write code in a way that can be read and understood easily by someone else) over to a world of 'let's see how much of a hero we can be in terms of packing as much as possible into a single line of unreadable code'!
@@GalaxyCat001 Aaagh!
You referred to the "historical oddity" of Fortran using .lt. and .gt. for < and > respectively. I suspect it's because when Fortran was developed there were no < and > signs on most if not all keyboards. Way back then they probably only had letters, numbers, and a few punctuation symbols on their keyboards.
Fun video, but you have to choose your drag race if you want your language to shine.
I would choose or at least value the languages for the following sweet spots:
C/C++ for systems programming: operating systems, compilers, dynamic memory usage scenarios (heaps, pointers).
Fortran for highly performant mathematical programming using arrays (matrices)
Cobol for efficient business data processing. Think fixed format records, requiring business logic and transactions to manage them.
TLDR;
Speaking as someone who has professionally programmed in Fortran, Cobol and C#. Two things have happened over the last 30-40 years which has undermined Cobol and Fortran in the mainstream: object orientation and non fixed format data processing (XML etc). Back in the 1960s and 70s mainframe business data was inputted on cards (one per line) in fixed format, read by card readers. Files were organised as a sequence of records not bytes. Cobol (and Fortran) were designed to process data in that format. Cobol supports fixed format record structures (which would all be pulled in from source code include files in a library) that may have been generated and managed through data dictionary products and processes. Such that a single read IO on a file on disk would read in sometimes hundreds of fields of data for many records straight into the memory buffer of the file description format in the program. Which would then be immediately available for processing by the Cobol business logic. No tedious and very compute expensive XML parsing and deserialisation (yada yada) required like in OO languages. Cobol was far more efficient in compute, and needed to be because of the cpus of that time. Also, although Cobol is verbose (literally), I would contend that there are more function points per 1000 lines of code than your typical C#.
This doesn't mean I like Cobol, I don't. It's not fun if you come from a Comp Sci background. I would never want to write a Cobol sort algorithm, or parse free format data or XML in Cobol. Just wasn't designed for it. Surely, you ask, businesses that still depend on Cobol (and it's the largest ones that do) need efficient sorting, so how does that work? External sorting routines of flat file data using sort keys etc were executed before the Cobol program started. Tools were SYNCSORT, DFSORT on the mainframe. These tools were highly optimised for IO and compute using virtual page management fixing and other tricks.
Fortran on the other hand, by design, is highly performant for array processing. All the most optimised numerical libraries are (or at least were) written in Fortran. Converting them to C/C++ is likely to be less efficient, at least in the default configuration. One reason is the native support of complex numbers. The other big reason is that subroutine parameters are assumed to not overlap each other in memory. In C/C++ any two function parameters are assumed that they might overlap in memory (unless you tell the compiler otherwise). That's the assembler view of the world but without bringing the knowledge of the programmer to the machine code. Why is that unperformant? When the C/C++ compiler generates the machine code for manipulations of multiple passed in arrays, it sometimes has to re-fetch from memory the first array parameter because an update was done to the other array parameter and the compiler doesn't know the first one hasn't been altered. So it can't retain it in a register. That *kills* array performance when you have multiple arrays being processed in an inner loop. The fix in C/C++, I think, is to tell the compiler that the parameters are not aliases of the same address (if it supports it, and the programmer knows about this stuff).
C/C++ is more like assembler, you can do anything but you might blow off your leg in the process. You have to be much more sophisticated to get the same performance and consequently stability when trying to play in the other languages areas of strength.
Old style Fortran (Fortran 66), was a bit dangerous because it was white space insensitive, so if you missed out critical tokens the statement would mean something quite different but still compile. For example, the beginning statement of a FOR loop (called a DO loop):
DO 1000 I=1,200 to repeat a section of code ending at line 1000 iterating variable I to 200 could be accidentally written as DO 1000 I=1.200 which with the blanks stripped out was an assignment statement of the value 1.2 to an implicitly typed REAL variable DO1000I rather than a loop declaration! Apocryphally, this "." instead of "," caused a rocket to crash.
Fortran 66 didn't have string variables, you had to store string constants in INT arrays, if you had to use them, and you outputted them as Hollerith literals in FORMAT statements. String concatenation? Forget it. The solutions were not pretty and were not very reliable.
So drag racing such different languages is kind of pointless. Eratosthenes sieve is not going to be the best fit. Fun though it is.
This surely took a while. But I'm glad it's finally here.
Strange, usually to calculate prime numbers it takes a for...
i'm sorry i'm leaving.
Some friendly nitpicking on Fortran :)
At 10:15 the video says "there's no equivalent of a "main" function - it appears that program flow starts at the top of the file and flows on down"
In fact there is an equivalent to a "main" method, and that is the "program primes_fortran" statement at line 1 of code. That is the entry point to the code.
10:40 "the contains keyword indicates where the subroutine and functions begin". Not wrong, but not the whole story: you could have put all functions and subroutines outside of the "program primes_fortran" (eg they could be in other files), in which case you don't need to use "contains". Using "contains" makes the corresponding functions internal, and allows more checks to be done at compile-time (eg that the type of passed arguments is correct) and gives some extra functionalities (eg you can recover an array's size and shape with size()/shape() build-in methods; otherwise you explicitely have to pass the dimensions as further parameters).
15:21 "Fortran doesn't use a conventional less than and greater than symbols". Starting from Fortran 90 it is totally possible to use =, ==, /= for comparisons. BTW, /= is used for "different from" (NOT 'divide by'!) because "!" is used to indicate comments; a small quirk is that for comparing boolean values you shouldn't use == and /= but .eqv. and .neqv. (equivalent/not equivalent). ASAIK boolean operators were kept separate from standard ones because they are assigned higher priority.
going completely on a tangent here but some more fun (?) facts (not about fortran):
note that /= is used in common lisp too (and common lisp (and of course several other lisps, and perl) has (have) similar quirks re: different operators for different types).
other languages do use other non-standard symbols for not equals too - like ~= (lua, matlab) and (sql apparently... i'm pretty sure other languages have this one too?)
as for different priorities (precedences), this can be seen e.g. in ruby too, like `&&` and `||` vs `and` and `or` (`and` and `or` have lower precedence - basically, pretend they're keywords)
For your second point, that was probably the case in a legacy version of Fortran you've worked with, but it isn't the case on modern fortran. You can create an explicit interface for subroutines and functions using a module. In a separate module file, you can decalre subroutines and functions and then call them in your main program by using the USE keyword followed by the module name at the top of the driver program. This use association provides an interface that allows dimensions of arrays to be passed to external procedures stored in the module. You can do all the same wonderful things like passing in implicit shape arrays and getting compiler error checks without the ugliness of having all the procedures tacked on at the bottom of the main program.
No offense to Mr. Van Bakel, but this was his first COBOL program (as he stated in the notes for the code). I've been slinging code for 40 years with about 30 working in COBOL. Most of that has been in processing huge volumes of transactions, and one of my focuses has been efficiency. I'd have written that very differently. There are optimization tricks I'd have used, such as avoiding COMP-3 and coding the core loop in line (since PERFORM does a jump).
Now we need a code-off to get the most efficient versions for each language, and THEN submit those for comparison... Let the games begin
when is the 'top 5 fastest languages' video coming out?
I'd like to notice that the Github repository also contains an object-oriented Fortran proram that shows a more modern Fortran code (the performance is similar or slightly faster than the shown version).
It’s also worth noting that there are far faster Fortran compilers out there (but not free), such as Intel’s FORTRAN compiler.
actually Intel's fortran compiler can be now downloaded for free (for non commercial use, I think)
@@luckyluckydog123 Intel's OneAPI licensing is a bit confusing. It says that it is free for all purposes, and a license is not required to download, install, or use it, but commercial licenses and even academic ones can be still requested and issued. I did not have the time to figure out what is really going on there.
@@elinars5638 I think it's just a way for big institutions to support the development of the compiler, while still allowing access to those who can't buy the license.
@@elinars5638 Those licenses give you access to premium support.
My first year at university (1987) doing Chemical Engineering for 1 term (I think) we had computing lectures on Pascal, because the Computing Science Dept. said this was the "best" language. Then in the last 2 weeks of the year we had a crash course in FORTRAN because that was what industry actually used.
I think that variables I, J, K, L, M, N. are implicitly defined as integers, because these variables are used for loop counters.
More importantly those letters are commonly used in mathematical series notation to denote iterators and matrix elements. As such they would have been familiar to the mathematicians and scientists that FORTRAN was designed for back in the 50's. I cut my teeth on FORTRAN as a physical chemist in the 90's and 00's on big Silicon Graphics machines.
I was once told at uni that with punched cards the information is not actually stored on the cards at all.
The cards are just used to make sure the holes stay in the right place. !!
That is technically correct, the best kind of correct. Also, hilarious!
I mean it's a bit more accurate to say the card is only there so you know where the holes aren't (though it would also be a pain to pick up without the card)
So are you ever going to release the video that shows the fastest languages?
Thank you and in my freshman days at UMass in 1967 I programmed in FORTRAN, yes using punchcards and turned them over to the night shift Data General card shredder. We got the mangled cards and a green bar printout saying, run failed. Lol. It took weeks to get a simple program completed. Used slide rule too so yes I am a dinosaur. Enjoy the videos and look forward to the next. Cheers
The slide rule -- the add-a-log computer
Been there. But in 1986 for some lab work on a mechanism course. I still own a lot of blank punchcards. I use some of them for my notes. I also had in collection some slide rules.
Current junior year UMass CS major, how funny that we should cross paths!
Wow, older than me! Graduated in 76.
I started programming when I first got a basic spectrum aged about 10. After school, I went to university to study a B.Eng in Electronics and Comp sci. (was about 1991) I was lucky enough to have a holiday job where I still work, and learned Assembler (Z8, 8052 etc) and straight C. They taught me C++ at uni and I remember the physics guy showing me Fortran. Anyone remember Zortech C? Graphics on an ms-dos screen.
On the FORTRAN side, there is one feature that is still unbeaten that is how it stores matrices. Fortran stores matrices as COLUMNS, not as rows. That may seem unintuitive, but think about it: you populate matrices by rows, say you read several data points at the same time from sensors, but then, all calculation is done over columns.
Also, you are allowed to reference and pass subsets or dimensions as parameters. So if you have a 3x3x3 matrix, you may call determinant(M[1]) where M[1] is a 3x3 matrix itself. Very handy !
Come on Dave it's been 6 months since a new episode! You can't keep us in suspense for this long!
still waiting :(
This series is one of my favorites on UA-cam. Really like the short dives into each language.
When I was younger, I used to hear a lot of jokes and warnings regarding Fortran. How old it is, how ugly the code is, how outdated.
However, in many academic fields, especially those involving linear algebra, it's still a very common language. It received multiple updates over the years, and it is certainly possible to write nice code in it. I think it's mostly just bad rep from ancient (Fortran 77) code written by amateurs (scientists and the like). True, there is a lot of very ugly, old code written in Fortran. Some people still didn't receive the memo that you can use more than 6 characters for your variable names. But old C code is sometimes just as ugly. Doesn't have much to do with the language, rather than just the different priorities (readable, easily extendable code vs one-time-use, specific, performant code).
I've had to read what I believe was Fortran 77 during my PhD, and reimplement a part of the code into something more modern. I've no formal CS education, so that was a trip. The one presented here was miles more readable. Not having to interpret characters placed at specific columns helped a lot.
Later freeform Fortran versions (90, 95, 2003, 2008 and even 2018 - yes, Fortran is still being developed) look far nicer and don't throw that many wrenches against the programmer trying to write a performant, but readable code.
"Some say he actually returned from a function before evn calling it, others say he was the actually developer steve balmer was yelling about". You destroyed me with that line XD
Can't wait for the next episode!
I think the engine revving sound is too loud compared to the rest of the video.
Especially if you are watching at work.
Sad that IBM's PL/1 language was once again ignored - it was the 3rd major language of the System 360/370 era. I coded a lot of it from mid-70's to mid-80's. It is essentially what you'd get if COBOL and Fortran had a child... a language good for both scientific programming and business programming. In fact IBM's DB2 database is largely written in it as I understand it.
PL/1 was a great language. I shipped a compiler for PL/1 on System 38 in about 1983. The language had 2 big faults. A. one could convert almost any data type into another. B. The rules for precision of binary ("fixed" point) arithmetic were very strange and confused lots of people. ---One of the best PL/1 features was descriptors that allowed arrays and strings to be passed to subroutines without having to explicitly pass bounds and sizes. There were other PL style languages which extended the use of descriptors and which resulted in totally safe code with very little extra overhead (maybe 3 to 5%) with the right optimizer.
@@williamdavidwallace3904 I never considered the ability to essentially type cast any data type into another a fault - as long as you understood what you were going to get it was quite useful at times.
@@rhymereason3449 That "feature" produced too many programming bugs. I always occasionally checked all the low level PL/I compiler messages to ensure that I had not done an implicit cast that I did not want. I prefer specific syntax that makes the cast obvious like in C.
My first college programming course was Fortran on punch cards. Good times.
Good to see they're finally moving past punch cards!
My course used mark sense cards - pencil marks rather than holes. I used to pass a room where some students used keyboards and I would sigh with envy.
I can't wait. Personally I'm rooting hard for Fortran, although I think C++ will almost always win in this type of tests, mostly because
1. in C++ you can do some low-level things which aren't easily achievable in Fortran
2. I assume the C++ implementation have received much more attention, especially by C++ optimization gurus... this is a consequence of it being a much larger community.
3. C++ compilers have probably received orders of magnitude more in effort than Fortran ones, and hence might be better at optimizing code.
Numbercrunching libs are still in Fortran. Also IBM has been improving the (paid) fortran compiler steadily. I think fortran will win.
The Fortran compiler is good.
The huge work to make efficient C++ compilers comes from C++ being a larger and more complex language. So muuuuch more alternatives to take care of.
I haven't watched yet, but based on my experience with FORTRAN and C++, I'm thinking FORTRAN might win, because all variables and functions are created in memory at compile time. I don't know if there is even a stack, but pretty sure there's no new, so no overhead for creating variables on the stack. Of course this is assuming that only doing numerical processing. Text handling is horrible in FORTRAN. Going to watch now. Nail biting.
@@axelBr1 Static/dynamic allocation doesn't matter much when it comes to computing speed.
The big cost with dynamic memory is the alloc/release operations. So the loops should not allocate and release.
When it comes to function parameters and return values, C++ will try to use registers whenever it can.
If keeping the implementation "faithful", you can't use many of the C++ features. I wrote an implementation that is about twice as fast as Dave's code, but it wouldn't be faithful. It uses memcpy to speed up first prime multiples (3,5,7,11), and a simple loop optimization for weeding out multiples of 3. By faithful, it is assumed you don't know any primes (though technically 2 is known in the example), though technically it could be argued that by the same logic as 2, you also know 3. Also, the use of SIMD instructions are not allowed, which negates some benefit of the language (and other languages with support for SIMD).
Fortran is still quite widely used in science. More so than C, let alone C++. Normally, if Python (or R) is too slow, you see people resorting to writing the critical piece in Fortran.
And, oh boy, do we still have old Fortran programs still around that require very, very specific input. :-D
C and Fortran index arrays in different orders. Always loved that when hopping in-between them.
I feel like Julia will change that
i remember trying to alter an old 77 file to use a namelist file for input to help some classmates with a tedious manual input for an astronomy course. i still have no idea (and google as well) if it was even possible.
I don't think this is universal. The statisticians I know use R for basic code and C++ (in R studio) for performance critical code.
The cryptographers I know use C or C++ for performance critical code.
@@gamekiller0123 Disclaimer: it's not. There are many libraries to do what Fortran does, but in a higher level language.
In the COBOL course which I assisted (1994) during university, the use of the COMPUTE command was strongly discouraged. Our professor would tell us that the use of COMPUTE command to solve simple math was like "dropping an atomic bomb to kill an ant". Therefore, I never used it.
FORTRAN is not that retro though. The latest FORTRAN standard is 2018, and it is still used, perhaps as much as C++, in HPC and scientific computing.
As much as c++?
I'd say more, _especially_ once one considers that Numpy routines are backended in Fortran.
@@jithintc4200 In my work, I actually encounter FORTRAN code more often than C++.
Well, I mean the last big revision to Cobol - v6 was in 2017. So it still sees updates too.
Did Dave ever reveal what language did the 4000 passes or what the top 3 were?
This isn't really testing languages, but specific compilers. And you could disassemble whatever is the winner and use that as the entry for assembly language. You can always match any other language by assembly in that way (or in c++ using inline assembly).
He already adressed that argument in a previous video.
@@Ruhrpottpatriot All for the good of entertainment...not science.
When I was working on my maths degree in the 00s, I had to learn F77 because all the research going on in numerical analysis used it. I asked my prof once why we didn't use F95 and was told that it didn't do anything new they needed.
Way to keep up with the promise of the BIG reveal. :)
Looking forward to more of this series. The mini dives into each language are awesome.
Is Fortran the reason why by default we call the for loop variable "i"?
I would say this tradition comes from math.
No Math was first I think. (Would love to hear trivia about it)
Coders and mathematicians are both lazy when writing hundreds of terms per pages so synthesizing them in the most dense symbols, yet naturally and easily readable by a foreign eye was a common goal.
Just like in science where a letter used in a revolutionary paper set a standard or simply being the first letter of a word that well describes it F : forces, d : derivate, v : vector etc...
Alphabets from different languages being the simplest readable caracters, when we'll exaust ALL THE LETTERS in the world we'll easily come up with new symbols to use. Emoji might be ones like that since Unicode is the new world alphabet
No. "i" is short for "inc" which is short for "increment". So we call it "i" because the sole function of that variable is to "increment"
@@danielskinner1796 "i" is short for many words, but i,j,k have traditionally been used as indexing variables in math...
@@danielskinner1796 I always thought it was "i" for "iteration". But then I never really bothered to look it up.
Curious to see how the Rust vs Swift episode will go.
It's my suspicion that Rust is #1 position right now.
@@WarrenGarabrandt That's my guess.
Given that this problem is so heavily single threaded CPU bound performance is largely determined by the back end, which for Rust is LLVM. So you should expect basically the same performance as Clang compiled C or C++. Clang is, to my understanding, a little behind MSVC and GCC, and a bit more behind the Intel complier, in terms of codegen quality.
@@WarrenGarabrandt Naaa, he's trolling us, its gunna be asm
@@SimonBuchanNz well rust compiles to HIR and MIR before finally to "llvm-friendly-language", which allows more optimization opportunities for the Rust frontend before it reaches llvm.
I haven’t seen the top 5 languages yet. When did you release it?
I see what you did there with The Stack - Top Gear reference.
70 languages...heck. Loving it Garage Dave.
Wouldn't a truly optimal compiler unroll such a deterministic algorithm and precompute the values into an array that the final executable just spits out repeatedly? I mean unless you hold back some critical value to prevent pre-calculating and enter it at runtime.
Wonderful to see the FORTRAN video has arrived! Great work!
I'm not going to lie I'm a bit disappointed that my optimised object-oriented version didn't make the video but I can absolutely see why from a storytelling point of view :-)
Thanks again! It may even be the better in many ways, and does show up Fortran has been updated, but I wanted to show the original "flavors" from the "olden" days!
@@DavesGarage Absolutely! Makes me wonder if we shouldn’t have tried to write a fixed-width F77 version
@@freezombie I already have thrown attention to the object-oriented version (according to the 53 likes of the comment quite a lot of people are interested in that version). But I guess most viewers have been surprised by how "modern" even procedural Fortran (no punch-cards involved) and that fortran is actually - within a its typical range of applications - a rather easy language.
For your object-orientented (and generally very clean) version the surprise would have been even stronger.
where is the next episode?
I dont know, but there is a certain cleaning'ness in COBOL that i like.
code that is declarative like that certainly looks easier to maintain and check the validity of.
I mean, if it dint work it wouldn't still been around.
I do believe the only reason it's still around is because too much critical code has been written in cobol in the 60s, and we can't get rid of it now. At least it's not financially feasible. But I'm 22 years old and have never programmed in cobol so I'm not actually sure, just going off of what I've heard.
@@mananasi_ananas well, yeah. But there was also lots of code written in asm, many flavors of basic and others
Yet cobol is the one monolith that seems to stand the test of time
I wasn't surprised at the closeness of the race because many of the primitives in COBOL are just longer mnemonics for underlying assembly constructs. As far as your super high speed race-offs it ofc matters what is the underlying CPU architecture. Since assembly was once just one-to-one correspondence to machine code, in older CPUs with simple pipeline stages for decoding the instruction bits via VLSI gate logic it would be tough to beat *well written & optimized* assembler on that kinds of CPU. Actually that is another varying factor that may or may not yield apples to apples in your tests. How optimized is the assembler code (in loops, initializing, passing variables)? Does it use registers or memory? What CPU? Single CPU or is the super fast mystery language spawning threads on multiple processors? stuff like that. Nah, I remain unconvinced for now.
Hi Dave, Great Video. After working for Micro Focus for 20 years, COBOL is a language I got to know rather well. COBOL has its uses and certainly the latest implementations provide almost every feature you could want. Is that a good thing? I leave that down to the individual to decide. Certainly the source code provided was somewhat verbose and modern COBOL could be written in a more streamlined form. As a matter of interest, which compiler did you use for the test? When I was a mainframe programmer, I preferred PL/1 - are you going to look at this too? Micro Focus has/had a PL/1 compiler when I worked for them.
Yes I agree, the COBOL code did look very dated, but maybe that is what they wanted to show. Morden COBOL is much more powerful and way less verbose focus. I still wonder if he use the Math functions in COBOL over the verbose old method (ADD -1 TO...)would the result been better?... oh yes on a side note ADD -1 is correct as processors can only add numbers ... mind blown
Hi Paddy, I always thought that the Micro Focus compiler converted the Cobol code to C and then compiled that. I think we would have to know a lot more about what Cobol (version and compiler) is being compared and what platform the comparison is being run on.
@@johnconcannon6677 For the comparison a recent version of gnucobol had been used. This appears to be an actual cobol-to-c transpiler. By contrast, gfortran is part of the GCC suite, but does never generate intermediate C code.
@@johnconcannon6677 Hi John, Not to my knowledge, well not on Windows anyway. The MF compiler can generate INT, GNT, MSIL, Java Byte Code and native object code. HTH. Paddy
@@paddycoleman1472 Thanks for the info Paddy. The last time I used Micro Focus Cobol was in the 1990's so I must assume that things have changed over time.
Remember good times working in COBOL in college. My carefully crafted code for my course assessment project chucked out over 10,000 error messages at compile time. Just because I missed out one full stop.
Thanks for the video and the opportunity to compare my programming efforts to others in your open source repo.
At the end of this video, you mention your upcoming episode on the fastest computer languages and that the fastest are "not C/C++". That isn't quite fair to those languages, as the only reason for that is that no aficionados of those languages have adapted the techniques used in the fastest contributes to them. As I show in my contributions to your "Drag Race", writing fast versions of this benchmark is just a matter of the right techniques applied to any language that can emit efficient "native" machine code for the host CPU.
The languages currently at the top of your "Drag Race" all do this to varying degrees of success, but you'll notice some common things among them: they all either produce their output CPU machine code through LLVM (the languages V, Crystal, Haskell - a back-end option, Julia, and Rust - contribuitions written by my cohort, Mike Barbar), through a C/C++ back end as more of a "transpiler" from one form of code to another as does the language Nim, or implement their own optimizing compiler from the ground up as in the language Chapel sponsored by Cray/HPE. This last way of writing one's own custom compiler is the hardest and generally needs a corporate sponsor in order to provide enough resources to do as in Chapel, and can even then only be moderately successful as in the language Go from Google. Languages that use LLVM as the back end are most successful when they manage to tune their output LLVM intermediate code and optimization options to best suit the LLVM native code generation process, with the most successful being Rust sponsored by Mozilla followed by Haskell,, Julia, etc. One of the fastest entries here is my Nim contribution using either the C back-end as submitted but also about as fast using the optional C++ back end.
So, indirectly, the C/C++ languages are represented at the top of the chart, and there exists some Nim generated C/CPP files as in intermediate code step in compiling my Nim contribution, although they are hard to read by humans as they are computer generated with "mangled" names to automatically prevent name collisions and "boilerplate" code as automatically generated by the Nim compiler. To be fast for this benchmark, all that is required is extreme optimization of a very few very tight loops that do the marking of composite number representations which is only a tiny fraction of the overall code setting up those loops.
My point is this: the C/C++ languages are in fact still as fast as the best and if the Nim language can transpile to fast code using them, then my techniques could also be used directly in C/C++ code. If I were to do such a thing, I would likely use C/C++ templates/"macros" to replace my Nim true hygenic Abstract Syntax Tree (AST) macros otherwise using the same techniques, and the resulting code would likely be at least as fast as that emitted by Nim; I am surprised that no C/C++ aficionados have yet done that in your "Race". The reason that I haven't done it myself is that I regard C/C++ code as ugly and difficult compared to writing code in Nim or other languages, and no longer work directly with C/C++.
So the point needs to be made: There is more than just being able to write fast code in whatever language as there are all kinds of ways to be able to get speed as I demonstrate in your "Race" in the "top Fuel" class using an assortment of languages; code needs to be effective and elegant as well, written using modern paradigms that produce safe(r) and relatively easy-to-read-and-maintain code (if one is familiar with the languages and paradigms - such as functional programming). Note: I am not a proponent of "everything is an object" as used in mostly Object Oriented Programming (OOP) languages although I use that paradigm when it makes sense or when it is forced on me by the language.
Just as one retired Saskatchewan native to another - formerly East of Davidson...
Go Riders! But I can't agree entirely. Unless and until someone demonstrates it by implementing a faster algorithm in C++, I could maintain the language is "too hard" to do it in, and that'd be a failure and limitation of the language! And see almost my lengthy argument about asm being the fastest.
My grandfather really did work in Cobol.
He'll ask me what language I'm working in right now and invariably he'll say "Oh, I guess I don't know that one. We always used Cobol".
I taught Fortran at uni in late 70s/early 80s, and it didn't look anything like the structured Fortran here. GOTO might have been considered harmful, but we still hadn't adopted versions of Fortran that thought so!
And writing Cobol was like speaking Shakespearian English!!!
Dave, I enjoyed your video but would like to point a couple of things out.
Cobol on the mainframe is first compiled into Assembler and the compiler creates a machine code executable so it's speed will depend on the ability of the compiler to create efficient assembler code (and therefore machine code). Your Cobol example looks pretty horrible so I am not surprised that it doesn't perform so well. In the old days, C could out perform Cobol on the big IBM mainframes but, since the introduction of the new z/OS assembler commands, Cobol has got faster than C as the new compiler/binder takes full advantage of the (over 200) new assembler commands.
I would like to ask you what platform(s) you are doing these tests on and what versions (and compilers) of the different languages you are using.
Once again, thanks for your interesting vid and best wishes, John
If you don't like the example you had plenty of time to submit a better one!
@@DavesGarage No problem but I was also trying to point out that Cobol code doesn't run on any computer, the compiled code gets run which on the mainframe would be machine code. Same goes for PL/1, C or Fortran. If you are using GNU Cobol or Micro Focus Cobol on Windows or a Unix box then it gets translated to C which is then compiled. I cannot discuss your findings without knowing what I am supposed to be comparing. It's not a question of the Cobol code itself but the actual machine code that the compiler generates and gets run.
I really like this series. This is last episode so far, right? I wanted to see which language won!
Thanks for this episode 😊I am truly surprised that Cobol is that close to Fortran!
Glad to see the continuation of the series.
I had never seen any Cobol code before, and now I hope to never see any more of it.
Fortran is still used today for very specific applications, primarily when doing massively complex simulations (such as simulating the motion of an entire galaxy for example). I find that really fascinating.
Recently subscribed, great series and channel Dave, found your channel via a friend linking me to the inverse square root discussion for quake 3.
i had to learn fortran90 for a computational physics course, and i actually enjoyed it. covid hit after i finished that semester. it's also possible to split up the fortran90 code into separate pieces, and then make use of a makefile to link up their dependencies. at least, that was what i was taught in order to keep the code modular.
it's also worth noting that it can be threaded to take advantage of multicore processors. dunno about cobol.
Wheres the reveal????????????????
Yeah it’s been 9 months.
I'm fascinated. While I felt like I was going to get nauseated when I looked at COBOL, I felt almost wonderfully nostalgic looking at Fortran.
Hi Dave !! , I can't find the next videos after the E04, you said there are 3 faster programming languages , where can I find your next video after E04 ? thanks a lot !
"What's the FASTEST Computer Language?" >>> "What's the SMARTEST Language Compiler?" ... Loving this series!
So true. The language is only the human-interface to the compiler.
My Dad used to write fortran for the Allianz.
Thanks for giving me a look at what that stuff actually looks like!
Great episode as always!
My favourite feature of COBOL from my 80's programming days was the ability to write variable length records by using a level 88 redefine with a PIC X OCCURS DEPENDING UPON VARIABLE and writing that redefine having defined the maximum records lentgth with the 01 definition. This was Honeywell BULL DPS mainframe era back in the 80's. The days of no GDB or indeed any real debugging tools and your friend being able to do dump analysis by hand - fun times.
my favourite feature of COBOL-85 is the "EVALUATE" statement - kicks the butt of any 'case' or 'switch' statement
Enjoyed the video. I'm curious : what Cobol and Fortran compilers did you use? The only Fortran compiler I've found that is readily available (and at no cost), is GNU? Maybe the same for Cobol? Thanks
In this case gfortran has been used (the now rather old Version 7 for Ubuntu 18.04).
A further free fortran compilers are form the LLV project ( flang).
Intel and Nvidia provide free, but not open-source compilers (for the exact license conditions please look on the pages of the respective vendors).
So no matter what operating system you use, there is a high-quality fortran compiler available without having to pay money.
Concerning COBOL I just know GNUCobol as a free compiler (Note that in contrast to gfotran it is not part of the GCC Compiler suite, but still needs a C compiler for creating the executables. It also appears to be available on all relevant platforms (I have not tested it personally as I am not a Cobol programmer).
Hi Dave im looking some what within the early MS / DOS and Qdos but am looking at a' Fat 12 '...going by my own recolection would this be the ' File Allocation Table' ? Is this allowed open memory port storing .Store open available memory sert data in circumfrefce srent tables ,allocate ,available open readddy usable jeest RAM?
Wait, wait, WAIT! There's something out there twice as fast as C++? That's gonna be an interesting episode...
My bet is Rust.
What's more interesting he said it was faster than assembly... Now that is kind of interesting. Personally I'd think that given unlimited development time and resources assembly would always outperform everything but possibly hand wrought machine code. The amount of work needed would however be kind of ridiculous which is why high level languages were developed in the first place.
Also the whole idea behind languages such as Cobol are that they are to be decoupled from the underlaying tech as far as possible. The programmers should not have any need to know anything about how the computers actually work at a low level to write the programs they want. This might not be conductive to creating the most efficient code for a particular computer architecture, but that was never the idea.
@@blahorgaslisk7763 Regarding assembly - back in the day it was king of the hill, but it breaks down with modern CPUs. In order to optimize assembly language properly, you need to know the exact model of CPU that it will be running on, and performance can suffer greatly if you choose the wrong CPU target. The compilers are able to take advantage of special instructions on the processors that would be difficult to match in assembly language. Modern super-scaler CPUs can issue more than one instruction per cycle, under the proper conditions. That being said, knowledge of assembly language is crucial for proper tuning of higher-level languages, since you need to look at the assembly language that the compiler generates, and determine if it can be improved by changing the source code, with tricks such as manual loop unrolling, cache utilization improvements, or algorithmic improvements.
I am slightly confused. How the leader is not an assembly language? You can disassemble the code that leader provides and here it is. Or do I miss something?
I'd be curious to see how this would play out on a mobren Mainframe like a Z15 and the latest COBOL compiler from IBM (V 6.3) with a ARCH setting of 13 vs fortran and C++. Not that would be a software race on the the best hardware.
I don't know how the modern system Zs differ from the older 370EA and XA generations of system, but C used to run really badly on older systems because of some features you would normally expect to be in an ISA, but which were missing in the 370 ISA (I was using Amdahl mainframes at the time).
System Z may well have filled in these omissions, because Linux is quite a big thing on current generation mainframes (such as the Linux One mainframe systems).
Considering how the bitfield is static and predetermined, I can't help but feel like it can be calculated at compile time if you just use the right attributes and modifiers in modern c++.
Fortran compilers yield different performance.
Which Fortran compiler did you use in this test?
gfortran. I think version 8 (Version 13 is the current one).
I started as an assembly language coder in the 70s with the pre-requisit of being able to read
object code in hexi-decimal format working for Rockwell Intl. I then graduated to ForTran coding
at IBM working as senior programmer and supervisor of various manufacturing engineering projects.
I currently am self employed coding in PGI ForTan 99 on a Windows XP system developing and supporting
manufacturing engineering applications used for post-processing CAD/CAM data to machine code data
for various CNC macine tools in mostly aerospace industy. I also have personnaly met many key people in
the computer and manufacturing business including John Backus the inventor of ForTran.
I seem to remember that Fortran was brilliant but not as a language, but because it made available the NAG library calls which actually did the real work.
The NAG library is written in Fortran, mostly. There was an Algol version in the 70s, and there still is a bit of C and C++ code in it.
Fantastic. Did COBOL influence any languages?
Sure. It influenced other languages by showing what NOT to do.
One of my college profs was a former Navy programmer. The Navy had determined that their Cobol code was too slow and wanted it converted to assembly. My prof wasn't that good with assembly, so he decided to rewrite it in Fortran, do a compile with assembly and turn in that ode. He figured the Fortran compiler could optimize the code at least as well as he could. Plus, he could write a couple lines of Fortran, then get enough "debugged" assembly code to meet his daily quota, then go fishing and have a couple of beers. BTW, Cobol was allegedly self-documenting. You could buy a version of Cobol from MICROSOFT at one point. I think it was about $700 when you could just grab a copy of Python for nothing.
Back in 77, FORTRAN was the first computer language I learned, as a freshman in Electric Engineering. But it was not the the modern FORTRAN used in this episode, it was FORTRAN 66 with its three way IFs and line numbers. In punched cards, of course. In 78 I started to work as an intern and the first thing I had to learn was COBOL. And I must say that COBOL has not changed much since then.
Back in 78, structure in COBOL was not yet widespread since it was only added to COBOL in COBOL-74. Since then there has been object-oriented COBOL. Regardless, I also hated my summer job writing COBOL. And typical for the era, I moved to FORTRAN to pay my tuition fees, but was secretly in love with ALGOL derived languages (ALGOLW, ALGOL68, C, PASCAL etc.).
Bit of trivia for a nerd quiz:
The FORTRAN version before 66 was IV (read as four but never written as a digit).
Waterloo Fortran II was abbreviated to WATFOR which was a nice acronym, and was followed by WATFIV which was FORTRAN IV and not five as many ppl imagined.
Perdue University Fast Fortran was PUFFT which always made me think of magic dragons...
What Fortran compiler was used?
gfortran 7 running in an Ubuntu 18 .04 Docker image
Yay! Was looking forward to the next installment in this series.
which compiler are you using (ie ifort, gfortran, etc) and compiler flags?
gfortran 7 running on a Ubuntu 18.04 Docker image
Oprimizations -Ofast -march=native
Dave’s where do i find the final results to all tests …. Wondering has this project concluded ?
Heck, I wanna know how Rust and Go places amongst the 40 y/o (matured/evolved) langauges....
Please, don't wait too long to publish ep05 Dave. And thank you so much for the series. REAL FUN TO WATCH
Hello, the final video? :D
Looking forward to 2027 when we find out what language did 4,000. This is a cool series but seriously. I hope the pace picks up a bit.
assembly? :O
@@orkhepaj I think he already said it wasn't hand written asm.
@@DFPercush hmm , then what?
@@orkhepaj Your guess is as good as mine. I wonder if it's some functional language like Haskell or maybe ADA / some LISP variant. I would be surprised because those languages favor safety over speed, but optimizers can do funny things.
I'm going to take a Wild Ass Guess and predict it's FORTH. Although it's interpreted, it seems to be amazingly fast for certain types of operations: not a lot of runtime overhead?
Would love to see you come back to this series. I'm particularly curious about languages that use JVM like Scala and Clojure, how they compare to Java, and perhaps how they compare to themselves when programmed using OOP vs FP, as well as on JVM vs compiled to native executables, which I know you can do at least with Scala.
Appreciate the stats and data, Wow, who would have thought how widely used Cobol is!