As we explore the older version 4T instructions, which operate seamlessly on even the most advanced Cortex-A and Cortex-R processors, keep in mind that the Cortex-M architecture resembles some of the older microcontrollers in use and requires a bit of explanation, which we'll provide throughout the book.

The Cortex-A and Cortex-R Families

The Cortex-A line of cores focuses on high-end applications such as smartphones, tablets, servers, desktop processors, and other products which require significant computational horsepower.
These cores generally have large caches, additional arithmetic blocks for graphics and floating-point operations, and memory management units. At the high end of the computing spectrum, these processors are also likely to support systems containing multiple cores, such as those found in servers and wireless base stations, where you may need up to eight processors at once.
Newer, 64-bit architectures include the A57 and A53 processors. In many designs, equipment manufacturers build custom solutions and do not use off-the-shelf SoCs; however, there are quite a few commercial parts from the various silicon vendors, such as Freescale's i.MX line.
Most importantly, there are very inexpensive evaluation modules on which students and instructors can write and test code, such as the BeagleBone Black board, which uses the Cortex-A8. The Cortex-R line of cores, by contrast, focuses on real-time applications, such as the braking system in an automobile. When the driver presses the brake pedal, the system is expected to have completely deterministic behavior: there should be no guessing as to how many cycles it might take for the processor to acknowledge the fact that the brake pedal has been pressed! In complex systems, a simple operation like loading multiple registers can introduce unpredictable delays if the caches are turned on and an interrupt comes in at just the wrong time.
Safety also plays a role when considering what might happen if a processor fails or becomes corrupted in some way, and the solution involves building redundant systems with more than one processor.
X-ray machines, CT scanners, pacemakers, and other medical devices might have similar requirements. These cores are also likely to be asked to work with operating systems, large memory systems, and a wide variety of peripherals and interfaces, such as Bluetooth, USB, and Ethernet. Oddly enough, there are only a handful of commercial offerings right now, along with their evaluation platforms, such as the TMS570 and RM4 lines from TI.

The Cortex-M Family

Finally, the Cortex-M line is targeted specifically at the world of microcontrollers, parts which are so deeply embedded in systems that they often go unnoticed.
As the much older, 8-bit microcontroller space moves into 32-bit processing for controlling car seats, displays, power monitoring, remote sensors, and industrial robotics, industry requires a variety of microcontrollers that cost very little, use virtually no power, and can be programmed quickly.
The Cortex-M family has surfaced as a very popular product with silicon vendors, with licenses held by a long list of companies and parts costing anywhere from twenty cents to two dollars.
The Cortex-M0 is the simplest, containing only a core, a nested vectored interrupt controller (NVIC), a bus interface, and basic debug logic. Its tiny size, ultra-low gate count, and small instruction set (only 56 instructions) make it well suited for applications that only require a basic controller.
The Cortex-M1 was designed specifically for FPGA implementations, and contains a core, instruction-side and data-side tightly coupled memory (TCM) interfaces, and some debug logic.
For those controller applications that require fast interrupt response times, the ability to process signals quickly, and even the ability to boot a small operating system, the Cortex-M3 contains enough logic to handle such requirements. Like its smaller cousins, the M3 contains an NVIC, an MPU, and debug logic, but it has a richer instruction set, an SRAM and peripheral interface, trace capability, a hardware divider, and a single-cycle multiplier array. The Cortex-M4 goes further, including additional instructions for signal processing algorithms; the Cortex-M4 with optional floating-point hardware stretches even further, with additional support for single-precision floating-point arithmetic, which we'll examine in later chapters.

At the most fundamental level, we can look at machines that are given specific instructions or commands through any number of mechanisms: paper tape, switches, or magnetic materials.
The machine certainly doesn't have to be electronic to be considered a computer. For example, in 1801 Joseph Marie Jacquard invented a way to weave designs into fabric by controlling the warp and weft threads on a silk loom with cards that had holes punched in them. Those same cards were later adapted for feeding programs into early computers (see Figure 1.). During the process of writing even short programs, these cards would fill up boxes, which were then handed to someone to run. Woe to the person who spent days writing a program using punch cards without numbering them, since a dropped box of cards, all of which looked nearly identical, would force someone to go back and punch a whole new set in the proper order!
However the machine gets its instructions, to do any computational work those instructions need to be stored somewhere; otherwise, the user must reload them for each iteration.
The stored-program computer, as it is called, fetches a sequence of instructions from memory, along with data to be used for performing calculations. In essence, there are really only a few components to a computer: a processor (something to do the actual work), memory (to hold its instructions and data), and busses to transfer the data and instructions back and forth between the two, as shown in Figure 1.
Those instructions are the focus of this book: assembly language programming is the use of the most fundamental operations of the processor, written in a way that humans can work with easily. Input and output interfaces, such as keyboards and displays, connect to both the central processing unit (CPU) and the memory; however, embedded systems may not have any of these components! Consider a device such as an engine controller, which is still a computing system, only it has no human interfaces.
The totality of the input comes from sensors that attach directly to the system-on-chip, and there is no need to provide information back to a video display or printer. To get a better feel for where we are in the process of solving a problem, and to summarize the hierarchy of computing, consider Figure 1. At the lowest level, you have transistors, which effectively move electrons in a tightly controlled fashion to produce switches.
Transistors are combined to build logic gates, and when gates are used to build blocks such as full adders, multipliers, and multiplexors, we can create a processor's architecture. The processor then has a language of its own, which instructs various elements, such as a multiplier, to perform a task; for example, you might tell the machine to multiply two floating-point numbers together and store the result in a register.
We will spend a great deal of time learning this language and seeing the best ways to write assembly code for the ARM architecture.
The binary number system, therefore, lends itself to use in computer systems more easily than base ten numbers. Numbers in base two are centered on the idea that each digit now represents a power of two, instead of a power of ten. In base ten, the allowable digits are 0 through 9, so if you were to count the number of sheep in a pasture, you would say 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and then run out of digits.
Therefore, you place a 1 in the 10's position (see Figure 1.) and continue counting. Now imagine that you only have two digits with which to count: 0 or 1. To count that same set of sheep, you would say 0, 1, and then you're out of digits. We know the next value is 2 in base ten, but in base two, we place a 1 in the 2's position and keep counting: 10, 11, and again we're out of digits to use. A marker is then placed in the 4's position, and we continue this process as far as we like.
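The counting exercise above can be sketched in a few lines of Python (a hypothetical illustration, not one of the book's ARM examples; Python's "b" format specifier renders an integer in base two):

```python
# Count the same set of sheep in base ten and base two.
# Python's "b" format specifier prints an integer in binary.
for sheep in range(6):
    print(f"base ten: {sheep}   base two: {sheep:>3b}")
```

Running it shows the pattern from the text: after 1 comes 10, then 11, then 100.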
You will see quickly that a number such as 101 normally doesn't raise any questions until you start using computers. At first glance, this is interpreted as a base ten number (one hundred one). However, careless notation could have us reading this number in base two, so be careful when writing and using numbers in different bases. After staring at 1's and 0's all day, programming would probably have people jumping out of windows, so better choices for representing numbers are base eight (octal, although you'd be hard pressed to find a machine today that mainly uses octal notation) and base sixteen (hexadecimal, or hex, the preferred choice), where the digits run past 9 into the letters of the alphabet.
These numbers pack quite a punch, and are surprisingly big when you convert them to decimal. Counting in base ten permits the digits 0 through 9 to indicate the number of 1's, 10's, 100's, etc., so counting in base sixteen requires a single digit to reach all the way to fifteen. In other words, to count our sheep in base sixteen using only one digit, we would say 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and then we can keep going, since the next position represents how many 16's we have. So the first six letters of the alphabet, A through F, are used as placeholders for ten through fifteen.
Once we've reached F, the next number is 10.

EXAMPLE: Find the decimal equivalent of A5E9. Each digit is weighted by a power of sixteen, so the value is 10 × 4096 + 5 × 256 + 14 × 16 + 9, or 42473.

EXAMPLE: Find the hexadecimal equivalent of the decimal number 94. There are tables that help, but the easiest way is to simply evaluate how many times a given power of sixteen can go into your number. Since 16^2 is 256, there will be none of these (or of any higher power) in your answer. The next power, 16^1, goes into 94 five times with a remainder of 14. Our number in hexadecimal is therefore 0x5E (see Table 1.).

It's unfortunate that nothing inside a microprocessor can interpret a programmer's meaning, since this could have saved countless hours of debugging and billions of dollars in equipment.
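The power-of-sixteen method described above can be checked with a short Python sketch (the helper name to_hex is mine, assumed for illustration; the values are the ones worked in the text):

```python
# Convert a decimal value to hexadecimal by repeatedly pulling out
# the largest remaining power of sixteen, as described in the text.
DIGITS = "0123456789ABCDEF"

def to_hex(value):
    if value == 0:
        return "0"
    power = 1
    while power * 16 <= value:  # find the largest power of 16 needed
        power *= 16
    out = ""
    while power >= 1:
        out += DIGITS[value // power]  # how many of this power fit
        value %= power                 # carry the remainder onward
        power //= 16
    return out

print(to_hex(94))        # 5E: sixteen goes into 94 five times, remainder 14
print(int("A5E9", 16))   # 42473: the decimal equivalent of 0xA5E9
```

Python's built-in int(s, 16) performs the reverse conversion, multiplying each digit by its power of sixteen and summing.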
Programmers have been known to be the cause of lost space probes, mostly because the processor did exactly what the software told it to do. When you say 0x6E, the machine sees 0x6E, and that's about it. This could be a character (a lowercase n), the number 110 in base ten, or even a fractional value! We're going to come back to this idea over and over: computers have to be told how to treat all types of data. The programmer is ultimately responsible for interpreting the results that a processor provides, and for making the intended interpretation clear in the code.
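This ambiguity is easy to demonstrate. The following Python sketch (an illustration of the idea, not an example from the book) prints the same 8-bit pattern, 0x6E, three different ways:

```python
# One bit pattern, three interpretations: the machine stores only the
# bits; the program decides what they mean.
bits = 0x6E
print(bits)          # as an unsigned integer: 110
print(chr(bits))     # as an ASCII character: n
print(bits / 256)    # as a fraction of 256, one possible fixed-point view
```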
In these next three sections, we'll examine ways to represent integer numbers, floating-point numbers, and characters, and then see another way to represent fractions later in the book.

Integer Representations

For basic mathematical operations, it's not only important to represent numbers accurately, but also to use as few bits as possible, since memory would be wasted on redundant or unnecessary bits. Integers are often represented in byte (8-bit), halfword (16-bit), and word (32-bit) quantities.
They can be longer, depending on their use. Unsigned representations make the assumption that every bit signifies a positive contribution to the value of the number. Signed representations make the assumption that the most significant bit is used to create positive and negative values, and they come in three flavors: sign-magnitude, one's complement, and two's complement.
Sign-magnitude is the easiest to understand: the most significant bit in the number represents a sign bit, and all other bits represent the magnitude of the number. A one in the sign bit indicates the number is negative, and a zero indicates it is positive. Addition is awkward in this scheme, since the hardware must compare magnitudes, subtract the smaller from the larger, and then give the result the sign of the larger number.
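As a sketch of the idea, here is 8-bit sign-magnitude encoding in Python (the helper name sign_magnitude is my own, assumed for illustration):

```python
# 8-bit sign-magnitude: the top bit is the sign, the low seven bits
# hold the magnitude, so representable values run from -127 to +127.
def sign_magnitude(n):
    assert -127 <= n <= 127
    sign = 0x80 if n < 0 else 0x00   # set bit 7 for negative values
    return sign | abs(n)

print(f"{sign_magnitude(5):08b}")    # 00000101
print(f"{sign_magnitude(-5):08b}")   # 10000101
```

Note that 00000000 and 10000000 would both denote zero, one of the scheme's quirks.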
Fortunately, sign-magnitude representations are not used that much, mostly because their use implies making comparisons first, and this adds extra instructions in code just to perform basic math. One's complement is a second option: to create a negative value in this representation, simply invert all the bits of its positive, binary value. The sign bit will be a 1, just as in sign-magnitude representations, but there are two issues that arise when working with these numbers. The first is that you end up with two representations for zero, and the second is that it may be necessary to adjust a sum when adding two values together, causing extra work for the processor. Consider the following example.
EXAMPLE: Add 127 and -15 using 8-bit one's complement. Solution: To create -15 in one's complement, simply write out the binary representation for 15, which is 00001111, and then invert all the bits: 11110000. Adding this to 127 (01111111) gives us 01101111 with a carry out of the most significant bit. The problem is that the answer is actually 112, or 0x70 in hex. In one's complement notation, a carry out of the most significant bit forces us to add a one back into the sum, which is one extra step: 01101111 + 1 = 01110000 = 0x70.

Two's complement representations are easier to work with, but it's important to interpret them correctly. As with the other two signed representations, the most significant bit represents the sign bit.
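The end-around carry rule can be sketched in Python (the operands and helper names here are my own, for illustration):

```python
# 8-bit one's complement addition: a carry out of the most significant
# bit must be added back into the sum (the "end-around carry").
def ones_complement(n):
    # Negative values are formed by inverting the bits of the magnitude.
    return n & 0xFF if n >= 0 else ~(-n) & 0xFF

def add_ones(a, b):
    total = ones_complement(a) + ones_complement(b)
    if total > 0xFF:                  # carry out of bit 7...
        total = (total & 0xFF) + 1    # ...goes back into the sum
    return total

print(f"{add_ones(127, -15):08b}")    # 01110000, i.e., 112 or 0x70
```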
However, in two's complement, the most significant bit is weighted, which means it carries the same magnitude as it would in an unsigned representation, only negatively. For example, if you have 8 bits to represent an unsigned number, then the most significant bit would have the value of 2^7, or 128. If you have 8 bits to represent a two's complement number, then the most significant bit represents the value -128, so a pattern such as 10000101 works out to -128 + 4 + 1 = -123. Notice in this calculation that the only negative value was the most significant bit.
Make no mistake: you must be told in advance that this number is to be treated as a two's complement number; otherwise, it could just be a large, unsigned number in decimal.
The two's complement representation provides a range of positive and negative values for a given number of bits; with 4 bits, for example, the range is -8 to +7. The number 8 could not be represented in only 4 bits, since 1000 sets the most significant bit, and the value is now interpreted as a negative number (-8, in this case). Arithmetic operations now work as expected, without having to adjust any final values. To convert a negative two's complement binary number back into decimal, you can either subtract one and then invert all the bits, which is usually the fastest way, or you can view it as -2^7 plus the sum of the remaining weighted bit values.
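The weighted-bit view of two's complement can be sketched in Python (the helper name from_twos is assumed, for illustration):

```python
# Interpret an 8-bit pattern as two's complement: the most significant
# bit is weighted as -2**7, the remaining bits keep their unsigned values.
def from_twos(byte):
    return (byte & 0x7F) - (128 if byte & 0x80 else 0)

print(from_twos(0b01111111))  # 127, the largest positive 8-bit value
print(from_twos(0b11111111))  # -1
print(from_twos(0b10000000))  # -128, the most negative 8-bit value
```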
Very large and very small values can be constructed by using a floating-point representation. While the format itself has a long history, with many varieties of it appearing in computers over the years, the IEEE 754 Standards Committee formally defined a 32-bit data type called single-precision, which we'll cover extensively in Chapter 9. These floating-point numbers consist of an exponent, a fraction, a sign bit, and a bias.
The most significant fraction bit has the value 0.5. To ensure all stored exponents are positive numbers, a bias b is added to the exponent e. For single-precision numbers, the exponent bias is 127. While the range of an unsigned, 32-bit integer is 0 to 4,294,967,295, the positive range of a single-precision floating-point number, also represented in 32 bits, is roughly 1.2 × 10^-38 to 3.4 × 10^38!
Note that this is only the positive range; the negative range mirrors it. The amazing range is a trade-off, actually: floating-point numbers trade accuracy for range, since the delta between representable numbers gets larger as the exponent gets larger. Integer formats have a fixed precision, where each increment is equal to a fixed value. Single precision typically provides 6 to 9 significant decimal digits of precision, while double precision gives 15 to 17. Special hardware is required to handle numbers in these formats efficiently.
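The accuracy-for-range trade-off is easy to see in Python, whose floats are double precision; the standard struct module lets us round-trip a value through single precision, and math.ulp reports the gap between a value and the next representable double:

```python
import math
import struct

# Round-tripping 0.1 through a 32-bit float shows the limited number of
# significant digits that single precision preserves.
as_single = struct.unpack("f", struct.pack("f", 0.1))[0]
print(as_single)          # close to 0.1, but only to ~7 significant digits

# The spacing between adjacent representable doubles grows with magnitude.
print(math.ulp(1.0))      # a tiny gap near 1.0
print(math.ulp(1e16))     # a gap of 2.0 near 10**16
```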
Historically, floating-point units were separate ICs that were attached to the main processor. Floating-point units are often quite large, typically as large as the rest of the processor without its caches and other memories. For these reasons, most microcontrollers do not include specialized floating-point hardware; instead, they use software routines to emulate floating-point operations. There is actually another format that can be used when working with real values: a fixed-point format. It doesn't require a special block of hardware to implement, but it does require careful programming practices and often complicated error and bounds analysis.
Fixed-point formats will be covered in great detail later in the book.

Character Representations

Bit patterns can represent numbers or characters, and the interpretation is based entirely on context. For example, the binary pattern 01000001 could be the number 65 in an audio codec routine, or it could be the letter A.
The program determines how the pattern is used and interpreted. Fortunately, standards for encoding character data were established long ago, such as the American Standard Code for Information Interchange, or ASCII, where each letter or control character is mapped to a binary value. While most devices may only need the basic characters, such as letters, numbers, and punctuation marks, there are some control characters that can be interpreted by the device.
For example, old teletype machines used to have a bell that rang in a Pavlovian fashion, alerting the user that something exciting was about to happen. The control character to ring the bell is 0x07. Other control characters include a backspace (0x08), a carriage return (0x0D), a line feed (0x0A), and a delete character (0x7F), all of which are still commonly used.
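These codes are easy to confirm in Python, whose character escape sequences map to the same ASCII control characters:

```python
# ASCII control characters from the text, checked with ord().
print(hex(ord("\a")))    # 0x7  -- BEL, rings the teletype bell
print(hex(ord("\b")))    # 0x8  -- backspace
print(hex(ord("\r")))    # 0xd  -- carriage return
print(hex(ord("\n")))    # 0xa  -- line feed
print(hex(ord("\x7f")))  # 0x7f -- delete
print(ord("A"))          # 65   -- the letter A again
```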
Using character data in assembly language is not difficult, and most assemblers will let you use a character in the program without having to look up the equivalent hexadecimal value in a table.

Every processor family understands its own set of instructions, and each set is unique to that particular processor.
These instructions might tell the processor to add two numbers together, move data from one place to another, or sit quietly until something wakes it up, like a key being pressed. However, all instruction sets have some common operations, and learning one instruction set will help you understand nearly any of them.
The instructions themselves can be of different lengths, depending on the processor architecture: 8, 16, or 32 bits long, or even a combination of these. For our studies, the instructions are either 16 or 32 bits long, although much later on we'll examine how the ARM processors can use some shorter, 16-bit instructions in combination with 32-bit Thumb-2 instructions.
The parameter called count is a convenience that allows the programmer to use names instead of register numbers.