What is the difference between embedded programming and PC programming?

In China, people working in embedded programming rarely come from a computer science background; most graduated in automatic control or electronics-related majors. These engineers have strong hands-on experience but often lack theoretical knowledge. On the other hand, a large share of computer science graduates go into online games, web development, and other high-level applications that sit on top of an operating system, and are not very willing to enter the embedded industry, since the road is not an easy one: they have strong theoretical knowledge but lack hardware-related knowledge such as circuits, and embedded work requires learning quite a bit of domain-specific knowledge, which makes the path harder to walk.

Being able to look at embedded problems from the standpoint of PC programming is the first step; learning to think the embedded way is the second step; combining the PC mindset with the embedded mindset and applying both to a real project is the third step.

Although I have not done a formal industry survey, from what I have seen and from the candidates I have interviewed, engineers in the embedded industry either lack theoretical knowledge or lack practical experience; very few have both. The root cause is still the problem of university education in China, which I will not discuss here to avoid a flame war. Instead, I want to list a few examples from my own practice to draw attention to some issues worth watching when doing embedded projects.

The first example:

A colleague developed a serial port driver under uC/OS-II, and problems showed up when the driver and its interface were tested together with a communication program written at the application layer. The serial port driver provides a function that queries the number of characters in the driver's receive buffer: GetRxBuffCharNum(). The upper layer needs to receive a certain number of characters before it can parse a packet. The code my colleague wrote, expressed as pseudocode, looked like this:

    bExit = FALSE;
    do {
        if (GetRxBuffCharNum() >= 30)
            bExit = ReadRxBuff(buff, GetRxBuffCharNum());
    } while (!bExit);

This code checks whether there are at least 30 characters in the current buffer and, if so, reads all the characters in the driver buffer into buff, looping until the read succeeds. The logic is clear and the intent is obvious, yet the code did not work properly. On a PC there would be no problem at all, but in the embedded system it misbehaved, and my colleague was stuck and could not see why. He came to ask me to look at it. When I saw the code, I asked him how GetRxBuffCharNum() was implemented. We opened it up:

    unsigned GetRxBuffCharNum(void)
    {
        cpu_register reg;
        unsigned num;

        reg = interrupt_disable();
        num = gRxBuffCharNum;
        interrupt_enable(reg);
        return (num);
    }

Obviously, the critical section between interrupt_disable() and interrupt_enable() guarantees the integrity of the global variable gRxBuffCharNum. However, because this function is called inside the outer do { } while() loop, the CPU disables and re-enables interrupts very frequently, and the window in between is very short. In practice the CPU may not get a chance to respond to the UART interrupt. Of course, this depends on the UART baud rate, the size of the hardware FIFO, and the speed of the CPU. The baud rate we were using was very high, about 3 Mbps. The UART start and stop signals each occupy one bit, so a byte takes 10 bit times; at 3 Mbps, transferring one byte takes roughly 3.3 us. How many instructions can the CPU execute in 3.3 us? A 100 MHz ARM can execute about 150 instructions. And how long does disabling interrupts take? On ARM it generally takes more than 4 instructions to disable them and more than 4 to re-enable them, while the UART receive interrupt handler itself is more than 20 instructions. Adding it all up, there is a real possibility of losing received data, which shows up at the system level as unstable communication.
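To make the failure mode concrete, here is a minimal sketch of the kind of UART receive interrupt handler such a driver typically has. The names UART_ReadDataRegister(), RX_BUFF_SIZE, gRxBuff and gRxHead are hypothetical, not taken from the original driver; only gRxBuffCharNum comes from the example. If the CPU cannot enter this handler before the next byte arrives, the hardware receive register is overwritten and a byte is lost:

    #define RX_BUFF_SIZE  256                    /* hypothetical ring-buffer size      */

    static unsigned char gRxBuff[RX_BUFF_SIZE];  /* software receive ring buffer       */
    static unsigned      gRxHead = 0;            /* write index into the ring buffer   */
    volatile unsigned    gRxBuffCharNum = 0;     /* count read by GetRxBuffCharNum()   */

    void UART_RxIsr(void)                        /* entered once per received byte     */
    {
        unsigned char ch = UART_ReadDataRegister();   /* hypothetical register access  */

        if (gRxBuffCharNum < RX_BUFF_SIZE) {
            gRxBuff[gRxHead] = ch;
            gRxHead = (gRxHead + 1) % RX_BUFF_SIZE;
            gRxBuffCharNum++;                    /* only the ISR increments the count  */
        }
        /* If interrupts stay masked longer than one byte time (about 3.3 us at      */
        /* 3 Mbps), this ISR runs too late and the incoming byte is silently lost.   */
    }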

Fixing this code is actually very simple; the easiest way is to modify it at the application level, that is:

    bExit = FALSE;
    do {
        DelayUs(20);    /* delay 20 us, usually an empty instruction loop */
        num = GetRxBuffCharNum();
        if (num >= 30)
            bExit = ReadRxBuff(buff, num);
    } while (!bExit);

This way, the CPU has time to execute the interrupt handler, avoiding the data loss caused by disabling and re-enabling interrupts too frequently. In embedded systems, most RTOSes do not ship with a serial port driver; when you design the driver yourself without fully considering how your code interacts with the kernel, deep-seated problems creep into the code. An RTOS is called a real-time OS because it responds to events quickly, and fast event response depends on the CPU being able to respond to interrupts. In Linux, drivers are tightly integrated with the kernel and run with it in kernel mode. An RTOS cannot simply copy the Linux architecture, but that structure is still a useful reference.
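Going a step further, a cleaner design is to have the driver wake up the application when data arrives instead of polling with a fixed delay; this is exactly the kind of driver/kernel cooperation mentioned above. The following is only a minimal sketch under the assumption that uC/OS-II's counting-semaphore API (OSSemCreate / OSSemPost / OSSemPend) is used; the helper names and the 30-byte packet size are carried over from the example, while everything else is illustrative rather than the original driver's code:

    #include "ucos_ii.h"                       /* uC/OS-II kernel header (path may vary) */

    static OS_EVENT *gRxSem;                   /* counts received bytes                   */

    void SerialDriverInit(void)
    {
        gRxSem = OSSemCreate(0);               /* no bytes received yet                   */
    }

    /* In the UART receive ISR, after storing the byte in the ring buffer:               */
    /*     OSSemPost(gRxSem);                  -- wake any task waiting for data          */

    void ReadPacket(unsigned char *buff)
    {
        INT8U err;
        unsigned i;

        for (i = 0; i < 30; i++) {             /* wait for the 30 bytes of one packet     */
            OSSemPend(gRxSem, 0, &err);        /* block instead of spinning; 0 = forever  */
            ReadRxBuff(&buff[i], 1);           /* fetch exactly one buffered byte         */
        }
    }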

As you can see from the example above, an embedded developer needs to understand every layer the code touches.

The second example:

A colleague was driving a 14094 serial-to-parallel chip. Because there was no dedicated hardware, the serial signal was bit-banged with GPIO. He hand-wrote a driver and spent three or four days debugging it, and there was still a problem: the parallel output was sometimes correct and sometimes not. I could not stand watching any longer, so I took a look at the code, which in pseudocode was roughly:

    for (i = 0; i < 8; i++)
    {
        SetData((data >> i) & 0x1);
        SetClockHigh();
        for (j = 0; j < 5; j++);
        SetClockLow();
    }

The 8 bits of data are shifted out from bit 0 to bit 7, one bit per clock pulse. It looks like it should work, and at first I could not see the problem either. Then I read the 14094 data sheet carefully and understood: the 14094 requires the clock high level to last at least 10 ns and the low level to last at least 10 ns as well. This code delays after driving the clock high but has no delay after driving it low. If an interrupt happens to arrive while the clock is low, the low period is stretched and the chip works; if no interrupt arrives during the low period, it does not. That is why the output was sometimes right and sometimes wrong.

The fix is also relatively simple:

    for (i = 0; i < 8; i++)
    {
        SetData((data >> i) & 0x1);
        SetClockHigh();
        for (j = 0; j < 5; j++);    /* hold the clock high */
        SetClockLow();
        for (j = 0; j < 5; j++);    /* hold the clock low  */
    }

After this change it worked completely normally. But the code is still not well portable, because the compiler may optimize these two empty delay loops away. If they are optimized out, the high and low levels are no longer guaranteed to last 10 ns, and the chip stops working. Truly portable code should turn each of these loops into a nanosecond-level delay, DelayNs(10).
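As a sketch, the portable version of the loop might then look like the following, where DelayNs() is the calibrated delay discussed next and the Set...() helpers are the same hypothetical GPIO wrappers as in the example:

    for (i = 0; i < 8; i++)
    {
        SetData((data >> i) & 0x1);     /* present bit i on the data line */
        SetClockHigh();
        DelayNs(10);                    /* 14094 minimum clock-high time  */
        SetClockLow();
        DelayNs(10);                    /* 14094 minimum clock-low time   */
    }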

Like Linux does at boot, you can measure at power-up how long one nop instruction takes, work out how many nops are needed to cover 10 ns, and then execute that many of them. Use a compiler directive or keyword that prevents optimization so the delay loop is not removed by the compiler; in GCC, for example:

    __asm__ __volatile__("nop");
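Below is a minimal sketch of such a calibrated delay. It assumes a variable gNopsPer10Ns that the startup code has filled in by timing a known number of nops against a hardware timer; both the variable and the calibration step are assumptions for illustration, not code from the original project:

    static volatile unsigned long gNopsPer10Ns = 1;   /* filled in by startup calibration */

    void DelayNs(unsigned ns)
    {
        unsigned long loops = ((unsigned long)ns * gNopsPer10Ns + 9) / 10;   /* round up */

        while (loops--) {
            __asm__ __volatile__("nop");   /* volatile keeps the compiler from removing it */
        }
    }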

As this example clearly shows, writing a good piece of code needs a lot of supporting knowledge behind it. What do you think?
