When working on an audio recording project, you must first understand the thing you're handling - SOUND.
SOUND can be described as waves of motion in air particles - regions being compressed (pressure higher than ambient) and rarefied (pressure lower than ambient). Our ears are designed to detect these tiny variations in air pressure and convert them into our perception of sound. Our two ears, though only about 6 inches apart, can even detect the tiny difference in a sound's time of arrival, which lets the brain build a 3D model of our environment and locate sound sources.
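Just how tiny is that arrival-time difference? A minimal sketch, assuming a 6-inch ear spacing and sound travelling at roughly 343 m/s in air at room temperature:

```python
import math  # not strictly needed here, but handy for extensions

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (assumed)

def max_itd(ear_spacing_m: float, speed: float = SPEED_OF_SOUND) -> float:
    """Worst-case interaural time difference: sound arriving from one side
    must travel the full ear-to-ear distance farther to reach the far ear."""
    return ear_spacing_m / speed

itd = max_itd(0.1524)  # 6 inches ≈ 0.1524 m
print(f"Max interaural delay: {itd * 1000:.2f} ms")  # ≈ 0.44 ms
```

Less than half a millisecond - and yet the brain resolves it well enough to place sounds left or right.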
So, SOUND has the characteristics of waves - it has FREQUENCY, AMPLITUDE, WAVELENGTH, PHASE, and SPEED. It also undergoes REFLECTION and ABSORPTION. Every sound engineer keeps these in mind and works with these characteristics with respect.
I'm not here to explain every one of these terms. You will find tons of material by searching on these keywords. However, here are some examples of how an understanding of sound helps a sound engineer.
(1) REFLECTION - When listening to an audio recording, sometimes we "feel" the existence of a room, because of all the little echoes (sound delays) from reflected sound. We call it reverberation. By studying how sound reflects back to the mic, we may be able to reduce the echoes - for example, by changing the mic position, or adding carpets, thick curtains, etc. Or simply moving closer to the mic will increase the proportion of direct sound reaching it.
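Why does moving closer help? The direct sound gets louder as you approach the mic (roughly 6 dB per halving of distance, by the inverse-distance law), while the reflected, reverberant sound stays at about the same level throughout the room. A minimal sketch, with an assumed reverberant level and reference distance:

```python
import math

REVERB_LEVEL_DB = -30.0  # assumed roughly constant reverberant level in the room
REF_DISTANCE_M = 1.0     # assumed distance at which direct sound reads 0 dB

def direct_level_db(distance_m: float) -> float:
    """Direct sound level via the inverse-distance law (-6 dB per doubling)."""
    return -20.0 * math.log10(distance_m / REF_DISTANCE_M)

for d in (0.25, 0.5, 1.0, 2.0):
    ratio = direct_level_db(d) - REVERB_LEVEL_DB  # direct-to-reverb ratio, dB
    print(f"{d:4.2f} m from mic -> direct-to-reverb ratio {ratio:+.1f} dB")
```

The closer distances show a much higher direct-to-reverb ratio, which is exactly the "less roomy" sound we're after.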
(2) FREQUENCY - At times, we "feel" the existence of a mic. Often this is due to the proximity effect. It is a behaviour of directional mics: when the sound source is TOO CLOSE to the mic, they tend to boost the low frequencies. The excessive low frequencies make us "feel" the mic. The easiest fix is to ask the speaker to stand back a bit from the mic. If that's not possible, cutting some of the low-frequency range with your EQ will help.
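That low-cut EQ move is just a high-pass filter. A minimal sketch of a first-order RC-style high-pass filter in plain Python - the cutoff frequency and sample rate here are assumed example values, not a recommendation:

```python
import math

def high_pass(samples, cutoff_hz=100.0, sample_rate=48000.0):
    """First-order high-pass filter: y[n] = a * (y[n-1] + x[n] - x[n-1]).
    Frequencies well below cutoff_hz are attenuated; those above pass through."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    out = []
    prev_x = prev_y = 0.0
    for x in samples:
        y = a * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out
```

Feed it a rumbling 20 Hz tone and it comes out much quieter; a 1 kHz voice-range tone passes almost untouched. Real EQs use steeper, more refined filters, but the principle is the same.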
(3) PHASE - When you are recording with more than one mic, a problem called phase cancellation can occur. That is, waves from different mics cancel one another out simply because they are out of phase - one arriving in compression while the other is in rarefaction. The result: the sound level decreases. Some meters can detect phase problems. With careful positioning of the mics, this can usually be avoided. Always observe the golden 3:1 rule: the distance from a sound source to any other mic should be at least 3 times its distance to its main mic.
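To see the cancellation itself, here's a minimal sketch: two equal-amplitude copies of the same tone, the second delayed by half a period (a 180-degree phase offset), summed as two mic signals would be in a mix. The frequency and sample rate are assumed example values.

```python
import math

def summed_peak(freq_hz, delay_s, sample_rate=48000.0, n=4800):
    """Peak level of a tone mixed with a delayed copy of itself."""
    mixed = [
        math.sin(2 * math.pi * freq_hz * t / sample_rate)
        + math.sin(2 * math.pi * freq_hz * (t / sample_rate - delay_s))
        for t in range(n)
    ]
    return max(abs(v) for v in mixed)

f = 1000.0
half_period = 0.5 / f              # 180-degree phase offset at this frequency
print(summed_peak(f, 0.0))         # in phase: peaks near 2.0 (reinforcement)
print(summed_peak(f, half_period)) # out of phase: peaks near 0.0 (cancellation)
```

In practice a real signal contains many frequencies, so a given delay cancels some and reinforces others (comb filtering) - which is why the fix is mic placement, not EQ.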
(4) SPEED - Suppose you have a chance to broadcast sound to a large crowd with multiple loudspeakers covering a great distance; sound delay must then be considered. Sound engineers actually calculate with the speed of sound and put it to good use, delaying the signals sent to the farther rows of loudspeakers so that they reinforce the sound from the stage instead of smearing it.
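The arithmetic is simple: delay each loudspeaker row by the time the stage sound takes to travel to it, so the two arrivals line up. A minimal sketch, with assumed example distances:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (assumed)

def tower_delay_ms(distance_from_stage_m: float) -> float:
    """Delay to apply to a loudspeaker row this far from the main stage,
    so its output arrives together with the sound travelling from the stage."""
    return distance_from_stage_m / SPEED_OF_SOUND * 1000.0

for d in (30.0, 60.0, 90.0):
    print(f"Row at {d:.0f} m -> delay {tower_delay_ms(d):.1f} ms")
```

Roughly 3 ms per metre - over a 90 m field that's more than a quarter of a second, easily enough to turn speech into mush if left uncorrected.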
These are just a few examples of how sound engineers put their knowledge of sound to good use. Hopefully they also deepen our respect for sound, so that we handle it with care.