#16
C is not a low level language
In article , Wolf K wrote:

> About ten years ago, I was startled, speaking to a new graduate with a
> computing degree, when he revealed that he'd NEVER DONE ANY ASSEMBLER.
> Thinking about it afterwards, I decided that the computing world is
> certainly big enough that he'd have no difficulty in finding
> employment, but it did throw me at the time; I'd come up through (7-bit
> serial then) 8-bit, 6800 and 6502, then in professional life some work
> in bit-slice processors (AMD 2901, 2960, etc., and the IDT copies
> thereof), where we had to _create_ our assembly tools as we assembled
> our processors. Maybe nowadays assembler is _rare_? I think
> human-created assembler is becoming rare.

it is indeed. machine-created assembly is almost always better than
anything humans can do, plus developers do not want to be tied to a
single processor.

> However, I have the impression that robotics requires machine-coded
> assembler, translated from whatever human-readable language was used to
> program the machine. Those neat videos of dog-like robots loping
> through the building are misleading. Their abilities are not only
> severely limited, but those abilities were programmed on large machines
> cabled to the robot. The programs were tweaked and compressed before
> being copied to the on-board chips that control the "dog".

where did you get that impression? that is wrong on multiple levels.

> Maybe someone from MIT or Boston Dynamics can clarify and correct.

maybe.
#17
C is not a low level language
In article , David Brown wrote:

> There are almost no programming fields that require hand-written
> assembly these days. Even if you are writing a compiler, or low-level
> libraries for a compiler, you are unlikely to need more than small
> sections or snippets of assembly. There are still occasions where
> assembly will give you noticeably more efficient results than a
> high-level language - but very, very few where the difference is big
> enough to warrant using assembly. Generally, if you think assembly is
> the best choice then you have picked an inappropriate processor,
> inappropriate tools, or an inappropriate programmer.

exactly.

> I think it is good for programmers to have some experience of assembly,
> and to have understanding of it - it helps give a better appreciation
> of how things work underneath, and can lead to the programmer writing
> better high-level code (especially on smaller systems). But that is
> different from actually writing assembly for real work.

what matters the most are good algorithms, and knowing assembly does not
help with that.
#18
C is not a low level language
In article , NY wrote:

> When I was at university, doing electronic engineering, in the early
> 1980s, we did a bit of assembler programming, in 6809. Everyone was
> very frustrated at the choice of CPU because those who had their own
> computer had either a CP/M-based (Z80) or Commodore Pet (6502); no-one
> had a computer that used a 6809 that they could practise on out of
> hours. Of course, a few years later, 8086 and its successors (386, 486
> etc.) would have been fairly universal ;-) Nowadays they'd probably
> also teach RISC (e.g. SPARC and ARM) processors.

their frustration was unfamiliarity and inability to learn new things.
the 6809 was a *very* good processor, along with its successor, the
68000 series, *far* easier to write code for than the z80 and x86.

> I remember as a project we had to write software which allowed serial
> data to be written to a tape and read back from it, and I don't think
> anyone got their equipment working: as a project it was a bit too
> advanced for the amount of practical time allocated to it. I *think*
> the high-level language we were taught was Pascal. Certainly we didn't
> learn C, because I remember having to learn that from scratch when I
> started my first job.

pascal was designed to be taught, but ended up being used for much, much
more than that.

> As a little exercise I managed to code a bubble sort in Z80 on my Wren
> CP/M 3 computer in the mid 80s. I wrote a BASIC program which allowed a
> user to enter integers into an array, then passed the address of that
> array to a Z80 sort routine and then returned control to the BASIC
> program to print it out. Developing it was fairly thankless because any
> mistake in the Z80 crashed the computer, requiring it to be rebooted
> (from floppy). But I got it working. I forget the actual timings, but I
> know that the Z80 routine was several orders of magnitude faster than
> the same algorithm in interpreted BASIC and several times faster than a
> compiled Pascal program with the same algorithm.

that just means the compiler wasn't any good.

> I would imagine that the ability to write assembler for any given CPU
> is nowadays fairly specialised and only done for specific routines that
> need the absolute maximum speed.

yep, and it's *really* hard to beat a modern compiler.
#19
C is not a low level language
"nospam" wrote in message ...

>>> but I know that the Z80 routine was several orders of magnitude
>>> faster than the same algorithm in interpreted BASIC and several times
>>> faster than a compiled Pascal program with the same algorithm.
>>
>> that just means the compiler wasn't any good.
>
> yep, and it's *really* hard to beat a modern compiler.

The compiler in question was Turbo Pascal of mid- to late-1980s vintage,
running on a Z80 under CP/M 3. I'm not sure how efficient that was. As
you say, modern compilers may well be more efficient - but they may also
bloat the code with code that was never actually called but was compiled
in nevertheless ;-)

I should have tried to code a more efficient sort algorithm than bubble
sort, to see how much improvement there was, but bubble is simple to
remember (*) and to code, and I was doing it as a programming exercise
rather than with a real-life application that needed the fastest sort
available.

(*) I can remember it without needing to look it up every time, and I
can visualise exactly what it's doing, whereas shell sorts start to make
my brain hurt working out what they are doing ;-)
#20
C is not a low level language
In message , Wolf K writes:

> On 2019-02-25 10:38, David Brown wrote:
> [...]
>> There are almost no programming fields that require hand-written
>> assembly these days. Even if you are writing a compiler, or low-level
>> libraries for a compiler, you are unlikely to need more than small
>> sections or snippets of assembly. There are still occasions where
>> assembly will give you noticeably more efficient results than a
>> high-level language - but very, very few where the difference is big
>> enough to warrant using assembly. Generally, if you think assembly is
>> the best choice then you have picked an inappropriate processor,
>> inappropriate tools, or an inappropriate programmer.
>>
>> I think it is good for programmers to have some experience of
>> assembly, and to have understanding of it - it helps give a better
>> appreciation of how things work underneath, and can lead to the
>> programmer writing better high-level code (especially on smaller
>> systems). But that is different from actually writing assembly for
>> real work.
> [...]
>
> Thanks for this, +1.

I'd agree that - IMO anyway - it helps for programmers to have had some
_experience_ of assembler, to - as you say - have a better appreciation
of how things work underneath, and write better code. But there are now
probably few places where the extra efficiency assembler can yield
justifies the extra cost of _using_ it. (_Knowing_ about it I still
think should be part of any training course that involves programming,
though.)

Yes, I still think assembler can produce more efficient code - but for
all but the simplest systems, coding it requires more brain power (which
costs money) than is justifiable in most cases, and _checking_ it is a
really stressful activity: it requires phenomenal concentration. For any
sizeable project, you need more than one coder working on it - and the
effort they have to put into translating the inputs and outputs into a
standard format to talk to each other arguably defeats the efficiency
gains. If they _don't_ use standard interfaces, the _maintainability_
suffers, as any maintainer has to learn the details of the bespoke
interfaces. This applies to FPGA-type work, too (programming in e.g.
VHDL or High-C). _Ideally_, the compiler - or optimiser - will remove
unnecessary format translation stages.

The two places where ultimate efficiency matters are very long-life,
low-supply situations, such as space probes, and _very_ mass-produced,
cheap devices. And even for both of those, there are plenty of examples
of something going wrong - Ariane V, that hole in modem/router devices
from a year or two ago - though I don't know in those particular cases
whether the problem occurred in assembler or in a medium- or high-level
language.
--
J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

"The first objective of any tyrant in Whitehall would be to make
Parliament utterly subservient to his will; and the next to overturn or
diminish trial by jury ..." Lord Devlin
(http://www.holbornchambers.co.uk)
#21
C is not a low level language
In article , NY wrote:

>>> but I know that the Z80 routine was several orders of magnitude
>>> faster than the same algorithm in interpreted BASIC and several times
>>> faster than a compiled Pascal program with the same algorithm.
>>
>> that just means the compiler wasn't any good.
>>
>> yep, and it's *really* hard to beat a modern compiler.
>
> The compiler in question was Turbo Pascal of mid- to late-1980s
> vintage, running on a Z80 under CP/M 3. I'm not sure how efficient that
> was.

that's not a modern compiler, nor a modern processor either.

> As you say, modern compilers may well be more efficient - but they may
> also bloat the code with code that was never actually called but was
> compiled in nevertheless ;-)

they don't, but even if that were true, code that is not called can't
slow anything down.

> I should have tried to code a more efficient sort algorithm than bubble
> sort, to see how much improvement there was, but bubble is simple to
> remember (*) and to code, and I was doing it as a programming exercise
> rather than with a real-life application that needed the fastest sort
> available.
>
> (*) I can remember it without needing to look it up every time, and I
> can visualise exactly what it's doing, whereas shell sorts start to
> make my brain hurt working out what they are doing ;-)

bubble sort is very easy to write, it's just not fast.
#22
C is not a low level language
On 25/02/2019 20:05, nospam wrote:

> code that is not called can't slow anything down.

That's actually not true. Code that is not called but is compiled in
will use up space in a memory page. If that means the code segment of
the program grows beyond a page boundary, that may then mean an extra
page fault will need to be handled when part of the program is swapped
out, which would otherwise not be necessary.

Also, code that is not called but is compiled in may confuse the
speculative-execution machinery of modern microarchitectures into
believing the code will do something when it cannot possibly do so. I
remember a blog post (can't find it now) from a guy who wrote about a
broken opcode on a particular PowerPC-derived processor which will
always cause the processor to lock up. Due to speculative execution,
even *including* that opcode in the program, even if it's never called,
could cause the processor to lock up.
#23
C is not a low level language
"nospam" wrote in message ...

> In article , NY wrote:
>
>>>> but I know that the Z80 routine was several orders of magnitude
>>>> faster than the same algorithm in interpreted BASIC and several
>>>> times faster than a compiled Pascal program with the same algorithm.
>>>
>>> that just means the compiler wasn't any good.
>>>
>>> yep, and it's *really* hard to beat a modern compiler.
>>
>> The compiler in question was Turbo Pascal of mid- to late-1980s
>> vintage, running on a Z80 under CP/M 3. I'm not sure how efficient
>> that was.
>
> that's not a modern compiler nor a modern processor either.

Exactly. I imagine that a modern compiler and processor may achieve a
result which is closer to native assembly language for that processor.

>> As you say, modern compilers may well be more efficient - but they may
>> also bloat the code with code that was never actually called but was
>> compiled in nevertheless ;-)
>
> they don't, but even if that were true, code that is not called can't
> slow anything down.

I was using "efficient" in both senses - both in terms of speed and in
terms of small executable size.

> bubble sort is very easy to write, it's just not fast.

Is it any quicker if alternate passes are in opposite directions, making
the largest number bubble to the top and then the smallest number bubble
to the bottom? As long as in both cases you stop each pass one element
short of where the previous one stopped, to avoid re-testing the number
that is now at the top/bottom of the list.
#24
C is not a low level language
On 25/02/2019 19:17, nospam wrote:

> what matters the most are good algorithms, and knowing assembly does
> not help with that.

What helps with that is understanding how a computer works:
understanding the difference between registers and memory locations, and
understanding how a processor performs calculations and stores the
results of those calculations.

It's impossible to write assembly language without at least a basic
understanding of how computers work, and writing assembly language
definitely helps you learn that a bit more; so in that respect it most
certainly does help with writing good algorithms.
#25
C is not a low level language
Wouter Verhelst writes:

> On 25/02/2019 20:05, nospam wrote:
>> code that is not called can't slow anything down.
>
> That's actually not true. Code that is not called but is compiled in
> will use up space in a memory page. If that means the code segment of
> the program grows beyond a page boundary, that may then mean an extra
> page fault will need to be handled when part of the program is swapped
> out, which would otherwise not be necessary.

This is true, but less damaging than the effect on instruction-cache
footprint, which is much larger than the effect on the TLB due to
additional pages.

> Also, code that is not called but is compiled in may confuse the
> speculative execution bits of modern microarchitectures into believing
> the code will do something when it cannot possibly do so.

Only to the extent that the code is in the execution path (e.g. a
seldom-executed 'else' clause); most compilers will elide uncallable and
unreachable code completely from the resulting executable.

> I remember a blog post (can't find it now) from a guy who wrote about a
> broken opcode on a particular PowerPC-derived processor which will
> always cause the processor to lock up. Due to speculative execution,
> even *including* that opcode in the program, even if it's not ever
> called, could cause the processor to lock up.

Again, it would need to be included in a code path that _could_ have
been executed (e.g. on a conditional branch) for speculative execution
to cause any issues. Speculative execution doesn't just pick up random
bits of the instruction stream and start executing them.
#26
C is not a low level language
In article , Wouter Verhelst wrote:

>> code that is not called can't slow anything down.
>
> That's actually not true.

it is true. what matters is the code that *is* called.

> Code that is not called but is compiled in will use up space in a
> memory page. If that means the code segment of the program grows beyond
> a page boundary, that may then mean an extra page fault will need to be
> handled when part of the program is swapped out, which would otherwise
> not be necessary.

that's more theoretical than anything real. nearly everything is i/o
bound, and then there are all the other processes...

> Also, code that is not called but is compiled in may confuse the
> speculative execution bits of modern microarchitectures into believing
> the code will do something when it cannot possibly do so.

modern compilers already handle that, which is one reason why writing
assembly that's better than what a compiler can output is *extremely*
difficult.

> I remember a blog post (can't find it now) from a guy who wrote about a
> broken opcode on a particular PowerPC-derived processor which will
> always cause the processor to lock up. Due to speculative execution,
> even *including* that opcode in the program, even if it's not ever
> called, could cause the processor to lock up.

that's called 'a bug'.
#27
C is not a low level language
In article , Wouter Verhelst wrote:

>> what matters the most are good algorithms, and knowing assembly does
>> not help with that.
>
> What helps with that is understanding of how a computer works;
> understanding the difference between registers and memory locations,
> and understanding how a processor performs calculations and stores the
> results of those calculations.

that does not matter. what matters is using the best algorithm for a
given task. a ****ty algorithm written in highly optimized assembly and
using only registers is still a ****ty algorithm.

> It's impossible to write assembly language without at least a basic
> understanding of how computers work, and it's definitely the case that
> writing assembly language helps you learn that a bit more; so in that
> respect it most certainly does help with writing good algorithms.

with *very* rare exception, there's no longer a need to write assembly
language.
#28
C is not a low level language
On 25/02/2019 17:17, nospam wrote:

> In article , NY wrote:
>> As a little exercise I managed to code a bubble sort in Z80 on my Wren
>> CP/M 3 computer in the mid 80s. I wrote a BASIC program which allowed
>> a user to enter integers into an array, then passed the address of
>> that array to a Z80 sort routine and then returned control to the
>> BASIC program to print it out. Developing it was fairly thankless
>> because any mistake in the Z80 crashed the computer, requiring it to
>> be rebooted (from floppy). But I got it working. I forget the actual
>> timings, but I know that the Z80 routine was several orders of
>> magnitude faster than the same algorithm in interpreted BASIC and
>> several times faster than a compiled Pascal program with the same
>> algorithm.
>
> that just means the compiler wasn't any good.

'Several times faster than compiled' sounds about par for the time.

>> I would imagine that the ability to write assembler for any given CPU
>> is nowadays fairly specialised and only done for specific routines
>> that need the absolute maximum speed.
>
> yep, and it's *really* hard to beat a modern compiler.

When writing my interpreters I can routinely get double the performance
of gcc -O3, using a non-optimising compiler for 90% of the code coupled
with in-line assembly for the main byte-code handlers. In this case it
works through having knowledge of the big picture, not available to gcc,
and using that to maintain a tight register-based ASM environment.
#29
C is not a low level language
In article , Bart wrote:

>>> I would imagine that the ability to write assembler for any given CPU
>>> is nowadays fairly specialised and only done for specific routines
>>> that need the absolute maximum speed.
>>
>> yep, and it's *really* hard to beat a modern compiler.
>
> When writing my interpreters I can routinely get double the performance
> of gcc -O3, using a non-optimising compiler for 90% of the code coupled
> with in-line assembly for the main byte-code handlers. In this case it
> works through having knowledge of the big picture, not available to
> gcc, and using that to maintain a tight register-based ASM environment.

gcc is not an example of a modern compiler. pretty much anything is
faster, it's that bad.
#30
C is not a low level language
On 25/02/2019 18:50, nospam wrote:

> In article , Bart wrote:
>>>> I would imagine that the ability to write assembler for any given
>>>> CPU is nowadays fairly specialised and only done for specific
>>>> routines that need the absolute maximum speed.
>>>
>>> yep, and it's *really* hard to beat a modern compiler.
>>
>> When writing my interpreters I can routinely get double the
>> performance of gcc -O3, using a non-optimising compiler for 90% of the
>> code coupled with in-line assembly for the main byte-code handlers. In
>> this case it works through having knowledge of the big picture, not
>> available to gcc, and using that to maintain a tight register-based
>> ASM environment.
>
> gcc is not an example of a modern compiler. pretty much anything is
> faster, it's that bad.

I got similar results with clang and MSVC. I don't have access to
anything else other than smaller compilers, which are much worse. Are
there any freely available C compilers that could generate code that is
/twice/ as fast as gcc -O3 (for the app in my example)?

(The performance of gcc in this case can probably be tweaked by messing
around with specialist options, adding attributes, selectively inlining,
etc., but then you end up spending as much effort as writing the ASM,
and it still would not get anywhere near that speed-up.)