Computing and Information Principles
Description
Pragmatically, there are three major physical aspects to “IT infrastructure” relevant to the practitioner:
- Computing cycles (sometimes called just “compute”)
- Memory and storage (or “storage”)
- Networking and communications (or “network”)
Compute
Compute is the resource that performs the rapid, clock-driven digital logic that transforms data inputs to outputs.
Software structures the logic and reasoning of the “compute” resource and allows dynamic inputs to vary the output according to the logic and reasoning laid down by the software developer. Computers process instructions at the level of “true” and “false”, represented as binary “1s” and “0s”; because humans cannot easily understand binary data and processing, higher-level abstractions of machine code and programming languages are used.
It is critical to understand that computers, traditionally understood, can only operate in precise, “either-or” ways. Computers are often used to automate business processes, but to do so, the process needs to be carefully defined, with no ambiguity. Complications, nuances, intuitive understandings, judgment calls: in general, computers cannot handle any of these unless and until you program them to, at which point the logic is no longer intuitive or a judgment call.
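To make this concrete, the following sketch (in Python) encodes a deliberately simple, entirely hypothetical refund rule; the thresholds and function name are illustrative, not drawn from any real system. The point is that every condition the computer evaluates must be spelled out as an unambiguous true-or-false test, and that these high-level statements are ultimately translated down to the binary instructions the processor executes.

```python
# Illustrative only: the business rule, thresholds, and names below are
# hypothetical. The computer evaluates each comparison as strictly true or false;
# anything left to "judgment" must either be made explicit or escalated to a person.

def approve_refund(amount: float, days_since_purchase: int) -> bool:
    """Return True if a refund request is automatically approved."""
    if days_since_purchase > 30:      # outside the return window: always rejected
        return False
    if amount <= 100.00:              # small amounts: approved without review
        return True
    return False                      # everything else needs a human judgment call


print(approve_refund(49.99, 10))   # True
print(approve_refund(250.00, 10))  # False (escalate to a person)
```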
Creating programs for a specific functionality is challenging in two different ways:
- Understanding the desired functionality, logic, and reasoning of the intended program takes skill, as does implementing that reasoning in software; both require extensive testing and validation
- The programming languages, designs, and methods used can be flawed and unable to withstand the intended volume of data, user interactions, malicious inputs, or careless inputs; testing for these must also be done, known as “abuse case” testing (see the sketch following this list)
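As an illustration of the second challenge, the following hypothetical sketch (using Python’s standard unittest module) tests a small input-handling function not only for its intended use but also for careless and malicious inputs. The function, range limits, and test cases are assumptions made for this example, not taken from any particular abuse case testing standard.

```python
# Hypothetical sketch of "abuse case" testing: alongside tests for intended
# behavior, we probe careless and malicious inputs. Function and values invented.
import unittest

def parse_quantity(text: str) -> int:
    """Parse an order quantity, rejecting anything outside a sane range."""
    value = int(text)                      # raises ValueError for non-numeric input
    if not 1 <= value <= 1000:
        raise ValueError("quantity out of range")
    return value

class TestParseQuantity(unittest.TestCase):
    def test_intended_use(self):
        self.assertEqual(parse_quantity("3"), 3)

    def test_careless_input(self):
        with self.assertRaises(ValueError):
            parse_quantity("")             # empty field submitted by mistake

    def test_malicious_input(self):
        with self.assertRaises(ValueError):
            parse_quantity("999999999")    # absurd volume meant to overwhelm the system
        with self.assertRaises(ValueError):
            parse_quantity("1; DROP TABLE orders")  # injection-style input

if __name__ == "__main__":
    unittest.main()
```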
Computer processing is not free. Moving data from one point to another – the fundamental transmission of information – requires matter and energy, and is bound up in physical reality and the laws of thermodynamics. The same applies to changing the state of data, which usually involves moving it somewhere, operating on it, and returning it to its original location. In the real world, even running the simplest calculation has a physical and therefore economic cost, and so we must pay for computing.
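A back-of-envelope illustration, using purely hypothetical prices and usage figures rather than any real cloud rate card, shows how quickly that cost becomes a line item:

```python
# Hypothetical figures only: an assumed hourly price and fleet size,
# not real cloud pricing. The arithmetic is the point.
hours_per_month = 24 * 30            # a virtual machine left running continuously
hourly_rate_usd = 0.10               # assumed price per VM-hour
vm_count = 20                        # assumed fleet size

monthly_cost = hours_per_month * hourly_rate_usd * vm_count
print(f"Estimated monthly compute bill: ${monthly_cost:,.2f}")  # $1,440.00
```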
Storage
Storage is bound up with the concept of state, which is related to but distinct from computation: computation is a process; state is a condition, and storage is how state is retained. Many technologies have been used for digital storage [Computer History Museum]. Increasingly, the IT professional need not be concerned with the physical infrastructure used for storing data. Storage is increasingly experienced as a virtual resource, accessed through executing programmed logic on cloud platforms. “Underneath the covers” the cloud provider might be using various forms of storage, from Random Access Memory (RAM) to solid state drives to tapes, but the end user is, ideally, shielded from the implementation details (part of the definition of a service).
In general, storage follows a hierarchy. Just as we might “store” a document by holding it in our hands, setting it on a desktop, filing it in a cabinet, or archiving it in a banker’s box in an offsite warehouse, so computer storage also has different levels of speed and accessibility (a brief sketch in code follows the list):
- On-chip registers and cache
- RAM, aka “main memory”
- Online mass storage (often “disk”)
- Offline mass storage (e.g., “tape”)
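The hierarchy shows up in everyday programming. The following minimal sketch keeps recently used data in main memory and falls back to a file on disk on a miss; the file name and records are invented for the example, and real systems add further tiers (on-chip caches, offline archives) following the same pattern.

```python
# Minimal sketch of the storage hierarchy in practice: hot data is kept in RAM
# (a plain dict) and, on a miss, fetched from slower online storage (a file on
# disk). The file name and records are illustrative only.
import json
from pathlib import Path

DISK = Path("records.json")                      # stands in for online mass storage
DISK.write_text(json.dumps({"user42": "Ada"}))   # seed the slower tier

ram_cache = {}                                   # stands in for main memory

def lookup(key):
    if key in ram_cache:                         # fastest tier is checked first
        return ram_cache[key]
    records = json.loads(DISK.read_text())       # slower: read the file from disk
    value = records.get(key)
    if value is not None:
        ram_cache[key] = value                   # promote the value to the faster tier
    return value

print(lookup("user42"))  # first call reads from disk; the second hits the RAM cache
print(lookup("user42"))
```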
Networking
With a computing process, one can change the state of some data, store it, or move it. The last is the basic concern of networking: to transmit data (or information) from one location to another. We see evidence of networking every day: coaxial cables for cable TV, or telephone lines strung from pole to pole in many areas. And, like storage, there is a hierarchy of networking:
- Intra-chip pathways
- Motherboard and backplane circuits
- Local area networks
- Wide area networks
Like storage and compute, networking as a service is increasingly independent of implementation. The developer uses programmatic tools to define the expected information transmission, and (ideally) need not be concerned with the specific networking technologies or architectures serving those needs.
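As a small illustration of that abstraction, the sketch below uses Python’s standard socket module to move a few bytes between two endpoints; whether the link underneath is copper, fibre, radio, or (as here) a loopback inside a single machine is an implementation detail the developer does not program against. The payload is, of course, made up.

```python
# Minimal sketch: the developer expresses "move these bytes from here to there";
# the physical path is an implementation detail. A local socket pair stands in
# for any network link.
import socket

sender, receiver = socket.socketpair()    # a connected pair of endpoints

sender.sendall(b"order #1234: 3 units")   # transmit data from one location...
data = receiver.recv(1024)                # ...and receive it at the other
print(data.decode())

sender.close()
receiver.close()
```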
Evidence of Notability
- Body of computer science and information theory; for example, Alonzo Church, Alan Turing, and Claude Shannon
- Basic IT curricula guidance and textbooks
Limitations
- Quantum computing
- Computing where mechanisms become opaque (e.g., neural nets) and therefore appear to be non-deterministic
Related Topics