Some History
The history of computing is long and interesting.
See: https://www.computerhistory.org/timeline/
The Antikythera mechanism
The Antikythera mechanism is believed to be the earliest known mechanical analogue computer. It was designed to calculate astronomical positions.
Ada Lovelace
In the 1840s, about a century before Konrad Zuse designed the first programmable computing machine, Ada Lovelace wrote what is regarded as the world’s first computer program. From a modern perspective, her work is visionary.
The First Modern “Computers”
The first modern computer “coders”, or “programmers”, worked in the USA: a small team of women who developed the new field of computer programming during the Second World War. They were originally hired as “computers” to calculate ballistics trajectories by hand.
These women were underpaid and undervalued. The 2016 film “Hidden Figures”, based on the book by Margot Lee Shetterly, depicts three African-American women mathematicians who were essential to the success of early space flight and who played a pivotal role in astronaut John Glenn’s launch into orbit.
Five Early Computing Machines
The Zuse Z3, electro-mechanical (Germany), 1941
The Atanasoff-Berry Computer, electronic (USA), 1942
Colossus Mark 1, electronic (UK), February 1944
Harvard Mark I / IBM ASCC, electro-mechanical (USA), May 1944
ENIAC, electronic (USA), 1945
Computers, Computing, Computational Thinking
Computers
Generation Z (born 1997-2012) has been brought up with tablet computers and smartphones from an early age.
Generation X (born 1965-1980), Millennials (born 1981-1996) and Baby Boomers (born 1946-1964) will have had varying exposure to the early “personal computers”, such as the BBC Micro and the Sinclair ZX Spectrum. Many of them played games on these early computers, and many enthusiastically assembled their own machines.
Computing
Often referred to as “Computer Science”. As a full GCSE, pupils study how computers work, how data are represented and handled by applications, and how to design and write programs, frequently in the Python programming language.
Computational Thinking
Computational thinking allows us to take a complex problem, understand what the problem is and develop possible solutions. Before computers can be used to solve a problem, the problem itself and the ways in which it could be resolved must be understood. Computational thinking involves taking a complex problem and breaking it down into a series of small, more manageable problems.
Programming tells a computer what to do and how to do it. Computational thinking enables you to work out exactly what to tell the computer to do.
Two important aspects of computational thinking are: (i) decomposition and (ii) abstraction.
https://www.bbc.co.uk/bitesize/guides/z7ddqhv/revision/2
Decomposition
involves analysing a complex problem or system and breaking it down into smaller parts that are more manageable and easier to understand. The smaller parts can then be examined, solved or designed individually, as they are simpler to work with.
If a problem is not decomposed, it is much harder to solve. Dealing with a complex problem is much more difficult than breaking a problem down into subproblems and solving them, one at a time. Smaller problems are easier to understand and can be examined in more detail.
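Since the GCSE curriculum mentioned above frequently uses Python, decomposition can be illustrated with a minimal Python sketch. The problem chosen here (producing a pass/fail result from raw exam scores) and all of the function names are hypothetical examples, not part of any curriculum:

```python
# Decomposition: break "produce an exam result" into small sub-problems,
# each solved by its own simple, testable function.

def read_scores(raw):
    """Sub-problem 1: turn raw comma-separated text into a list of numbers."""
    return [int(s) for s in raw.split(",")]

def average(scores):
    """Sub-problem 2: compute the mean score."""
    return sum(scores) / len(scores)

def grade(mean):
    """Sub-problem 3: map a mean score to a result."""
    return "pass" if mean >= 50 else "fail"

def result(raw):
    """Each sub-problem is solved separately, then the pieces are combined."""
    return grade(average(read_scores(raw)))

print(result("40,60,80"))  # average is 60 -> "pass"
```

Each small function can be understood, tested and fixed on its own, which is exactly the benefit the paragraph above describes: the decomposed problem is easier to work with than the whole.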
Abstraction
is the process of filtering out – essentially ignoring – the characteristics of problems that are not needed in order to concentrate on those that are needed. It is also the filtering out of specific details. From this, an idea of what is to be solved can be created.
Abstraction allows programmers to create a general idea of what the problem is and how to solve it. The process instructs them to remove all specific detail and any patterns that will not help find a solution. An idea of the problem can then be formed. This idea is known as a ‘model’.
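The notion of a ‘model’ formed by filtering out unneeded detail can likewise be sketched in Python. The scenario (estimating a journey time) and the class and field names are hypothetical illustrations:

```python
# Abstraction: keep only the characteristics needed to solve the problem.
# To estimate a journey time we need distance and speed; the car's colour,
# make and number of seats are filtered out of the model entirely.

from dataclasses import dataclass

@dataclass
class JourneyModel:
    distance_km: float  # needed to solve the problem
    speed_kmh: float    # needed to solve the problem
    # colour, make, seat count: deliberately omitted (abstracted away)

    def hours(self):
        """The model contains just enough detail to answer the question."""
        return self.distance_km / self.speed_kmh

trip = JourneyModel(distance_km=120, speed_kmh=60)
print(trip.hours())  # 2.0
```

The class is the ‘model’ in the sense used above: a general idea of the problem with all irrelevant specifics removed.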
The Old Computing & The New Computing
In February 2025, Google’s response to the search query “Old Computing, New Computing” included the following “AI Overview”:
“‘Old computing’ refers to early computer systems utilizing large, bulky machines with technologies like vacuum tubes, requiring dedicated rooms and specialized operators, while ‘new computing’ describes the modern era of smaller, portable devices with powerful processing capabilities, accessible to a wide range of users, often characterized by cloud computing, artificial intelligence, and interconnectedness through the internet; essentially, a shift from large, centralized mainframes to smaller, personal computers with significantly increased processing power and accessibility.”
I prefer a human-centred/HCI perspective, as described by Ben Shneiderman in his 2003 book.
Refer: “Leonardo’s Laptop: Human Needs and the New Computing Technologies” by Ben Shneiderman, The MIT Press, 2003.
Professor Shneiderman is an internationally recognised pioneer of computer science: a deep thinker, researcher and software/application creator who has focused on information management and the human-computer interface.
In his book “Leonardo’s Laptop”, he characterises the “Old Computing” as what computers could do, and the “New Computing” as what people can do. In my book about ‘Frameworks’, my take is on enabling ordinary people (cf. the technical view of “allowing users …..”). Technology goals should serve human needs: amplifying kinship with the underprivileged and promoting positive human values.
The aim is to facilitate rather than replace human performance; most people want to be in control of the computer.
To quote a publisher’s perspective: “Ben Shneiderman’s book dramatically raises computer users’ expectations of what they should get from technology. He opens their eyes to new possibilities and invites them to think freshly about future technology. He challenges developers to build products that better support human needs and that are usable at any bandwidth. Shneiderman proposes Leonardo da Vinci as an inspirational muse for the “new computing.” He wonders how Leonardo would use a laptop and what applications he would create.
Shneiderman shifts the focus from what computers can do to what users can do. A key transformation is to what he calls “universal usability,” enabling participation by young and old, novice and expert, able and disabled. This transformation would empower those yearning for literacy or coping with their limitations. Shneiderman proposes new computing applications in education, medicine, business, and government. He envisions a World Wide Med that delivers secure patient histories in local languages at any emergency room and thriving million-person communities for e-commerce and e-government. Raising larger questions about human relationships and society, he explores the computer’s potential to support creativity, consensus-seeking, and conflict resolution.”
In an interview published in the ACM journal ‘Ubiquity’ in September 2002, Professor Shneiderman spoke about how designers can help people succeed.
“Many users of today’s computer systems suffer and experience frustration in trying to use their computers. I think every designer would like to envision a future that is graceful, elegant, comfortable, satisfying, comprehensible, predictable and controllable. Those are the adjectives that we’d like to attach to the interfaces and systems that we use. The book calls for a new renaissance and fresh thinking about user needs — elevating the expectations of users and the responsibilities of designers. This focus on users leads me to the book’s repeated refrain: The old computing is about what computers can do; the new computing is about what people can do. Designers can do better in helping users succeed. Too often people are struggling because they can’t understand the menu choices, they don’t know what the dialog boxes mean, and the error messages are too frustrating and confusing. Attachments won’t open. Viruses intrude on their experience. Spam clutters their e-mail inboxes. I have found Leonardo da Vinci to be an appropriate inspirational muse for the new computing. He combined art and science and aesthetics and engineering. That kind of unity is needed once again.”
HCI (Human-Computer Interaction) / Human-Centred Computing
Human-computer interaction (HCI) is an area of research and practice that emerged in the early 1980s, initially as a speciality area in computer science.
The reader might wonder why Shneiderman chose to write “Leonardo’s Laptop” some decades later, when surely the design of human-computer interaction and the state of the art in computer interfaces (such as GUIs) should have been providing a great interactive experience.
Computers, Humanity and “User Interfaces”
Between computing machines and humanity sit “user interfaces”. “UX design” has largely taken over from “HCI”: the “I” in traditional “HCI” stands for interface and interaction, while the “UX” in “UX design” stands for user experience. Applied to ‘Frameworks’, the intended user experience is easy, positively cognitively stimulating and useful, enabling a tight coupling (with minimum barriers) of thinking (the cognitive process) to representation, visualisation and communication (the modelling process): the model, and the process of constructing it, becomes the interface to thinking.
Human-centred computing (HCC) is the study of how to better bridge the gap between computing systems and humans: to centre the process of computing system design around human needs and the augmentation of our abilities. The goal of human-centred computing is to create technologies that better meet human needs, through studying the needs of humans (The University of Cambridge). Computational systems are now deeply integrated into the fabric of people’s lives. Human-centred computing examines the impacts that such systems have on individuals, communities and societies, and identifies ways these systems can be designed to be more ethical and empowering, and to support human flourishing (The University of Oxford).
As of February 2025, the home page of the University of Cambridge Department of Computer Science defines the goal of human-centred computing, which “is to create technologies that better meet human needs, through studying the needs of humans. Using diverse research methods from social science, experimental psychology, cognitive science and other disciplines, we address grand challenges such as social and emotional interaction with robots, or crossing the perceptual line between interaction with virtual and real worlds. We work with AI, machine learning and data science methods to build intelligent tools for digital life, supporting business and engineering, artistic expression and enquiry, and enabling collaborative design processes that address global challenges. Through addressing human priorities with a commitment to cross-disciplinary rigour, we make research contributions in core fields of computer science such as human-computer interaction, computer graphics, visualisation, and display technologies. Members of the group are also leaders in emerging specialist fields including affective computing, computer music, human-robot interaction, diagrammatic reasoning, computational photography, end-user programming and ubiquitous computing.”
New Thinking in Computing in the 21st Century – 2025?
Maybe it is best to leave it to the elite corporations employing thousands of designers and engineers?
The fact is that individuals who come forward with “new ideas” are singing in the dark, whereas established corporations and large organisations are recognised for their potential contributions and for their tools for creativity and productivity.
What is “new thinking” in the computing landscape?
Quantum Computing?
The AI Revolution?
“Artificial Intelligence – The Revolution Hasn’t Happened Yet”
Michael Jordan, 2019
https://hdsr.mitpress.mit.edu/pub/wot7mkc1/release/10
“AI and emerging technology at Davos World Economic Forum 2024: 5 surprising things to know”
https://www.weforum.org/stories/2024/01/surprising-things-to-know-about-ai-and-emerging-technology-at-davos-2024/
Well, it has now!
https://www.weforum.org/stories/2025/01/industries-in-the-intelligent-age-ai-tech-theme-davos-2025/
What is “new thinking” in the educational landscape – at the school coal face?

This is a “Thinking Model” about AI in Education (at the school coal face), created by a computer scientist who has recently joined the teaching profession.