Video Transcript:
Thus far, the videos in the 10-part whiteboard series have focused on the characterization of the proposed strain of AI and on why it is significant, not only for achieving a general variant of AI but also for enhancing the human theory of knowledge. The former involved alluding to a disruptive modeling of intelligence, while the latter invoked certain novel mathematical insights. Going forward, the emphasis of the videos will center on the high-level basis for a generic method to accomplish a more refined approach to technology & sciences – the ‘How’ aspect of the research. This will comprise:
- An understanding of Object-Orientation, the most fundamental and inescapable constraint inherent to the human intellect,
- A philosophical overview of the proposed theory of mind – the Omniject-Orientation,
- The Omnijective Cognitive Architecture – a master blueprint that encapsulates any cognitive architecture ever conceived or humanly conceivable,
- An understanding of the Omniject and the ensuing Omnijectivity, and finally
- An impact assessment as a summary of the transformative potential of the research, organized into the categories of Technology, Science, and Philosophy.
How exactly the consequent method is applied in realizing general artificial intelligence, wading through the multitude of intricacies involved, will be demonstrated as part of Project POC.
Numerous cognitive architectures have been proposed since the beginning of modern research in cognitive modeling, spanning a diversity of ideas, methods, disciplines, scopes, and degrees of relative success. Several comparative studies of these architectures have been carried out over the years as surveys, categorizations, and evaluations. Here, the fundamental concern common across the architectures – one that inhibits the development of a truly artificial and general intelligence – is isolated in the context of an extract derived from the Acontextual model of cognition. The extract comprises five core conceptions, namely: the Intelligence System, the Subject-Object contract, the Potential semantic schema, Heuristic subject agent enablement, and the Postulate of Consistency over Correctness. As will be seen, each of these notions is implicitly violated and offset by narrower alternate interpretations in every conceivable cognitive architecture. Though the commonality is drawn in reference to the prevailing architectures, it is in fact rooted far deeper, in the very foundations of the human cognitive process.
Broadly, the limitations of the existing cognitive architectures fall into two groups – Programming related deficiencies, and Knowledge modeling deficiencies.
Programming related deficiencies
The human programming model does not allow for a separation of program contexts from programmer contexts – the comprising subject-object contract is implicit and always between the programmer and the programmable. Programs merely stage the act against the inevitable backdrop of the semantics resulting from programmer-programmable context fusion. This relegates the supposed intelligent agent to yet another object, a mere extension of the programmer’s intelligence. The proxy contextualization involved invalidates any synthetic intelligence system & falls back onto the encompassing human intelligence system.
Current-day architectures typically employ rules, both implicit and explicit, in a bid to abstract out program contexts. However, the emphasis of rule extraction is not so much on providing non-default subject contexts to the programs as on modeling the object subtleties involved – the underlying premise being that the ‘intelligence’ resides exclusively within object representations. This line of thought trivializes both the Intelligence System and Subject-Object contract postulates in the case of any programmatic approach to AI. Further, given that the models of computation supported by our Turing machines are intrinsically algorithmic, programming has never been a process that heuristically enables the programs.
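To make the critique concrete, here is a minimal, hypothetical sketch (all names and rules are invented for illustration; this is not code from the research) of a rule-based ‘agent’ whose entire interpretive context is fused in by its programmer before it ever runs:

```python
# The programmer fixes the semantics up front: which features matter and what
# they imply. The subject-object contract here is between the programmer and
# the rule table, never between the agent and the world.
PROGRAMMER_RULES = {
    ("has_feathers", True): "bird",
    ("has_fur", True): "mammal",
}

def classify(observations: dict) -> str:
    """The 'agent': it can only replay the programmer's fused context."""
    for (feature, value), label in PROGRAMMER_RULES.items():
        if observations.get(feature) == value:
            return label
    # Anything the programmer did not anticipate falls outside the agent's
    # world; it has no non-default subject context of its own to fall back on.
    return "unknown"

print(classify({"has_feathers": True}))  # "bird" - the programmer's verdict, replayed
print(classify({"has_scales": True}))    # "unknown" - outside the fused context
```

However elaborate the rule table grows, the program remains an object staged by the programmer; in the transcript’s terms, the proxy contextualization never lifts.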
Knowledge modeling deficiencies
Knowledge representation formalisms such as rules, meta-rules, semantic networks spun over corpora of facts, domain ontologies, object patterns extracted through training, etc. – the staple seed of intelligence for current-day architectures – comprise eventualized forms of object semantics at different levels of abstraction. Object characterization so modeled assumes a definite form and format outside the purview of the concerned synthetic agent before being put to use in generating intelligence. The consequent proxy eventualization mandates external intervention – ever more of the human modeler element – to scale the resulting intelligence, effectively containing general intelligence capabilities.
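As a concrete illustration (again a hypothetical toy, not the research’s own formalism), consider a tiny semantic network: every fact below was ‘eventualized’ by a human modeler, in a fixed form and format, before the agent could use it:

```python
# Each triple assumed its definite form outside the agent's purview; the
# agent consumes the knowledge but played no part in shaping it.
SEMANTIC_NETWORK = {
    ("canary", "is_a"): "bird",
    ("bird", "is_a"): "animal",
    ("bird", "can"): "fly",
}

def query(subject: str, relation: str):
    """Classic semantic-network traversal: inherit properties via is_a links."""
    node = subject
    while node is not None:
        if (node, relation) in SEMANTIC_NETWORK:
            return SEMANTIC_NETWORK[(node, relation)]
        node = SEMANTIC_NETWORK.get((node, "is_a"))  # climb the hierarchy
    return None

print(query("canary", "can"))  # "fly", inherited via canary -> bird
```

The agent can recombine the triples but cannot mint new ones; scaling its ‘knowledge’ means scaling the human modeling effort, which is precisely the proxy eventualization being criticized.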
Some eventualized forms of knowledge depiction, however, tend to be more fundamental than others in that they allow program-scoped eventualization to a limited extent. Representation learning methods and meta-rule-based AI, for instance, do produce finer-grained patterns of knowledge than were previously afforded, but remain confined in terms of indefinitely extensible eventualization and the explosion of agent contexts across domains. In general, the potential characterization of the Semantic has not permeated the prevailing knowledge modeling paradigm.
Given these limitations intrinsic to the method, the human likeness of AI has traditionally been sought in the consequence, irrespective of the adopted architectural approach, where an emergent notion of correctness prevails over the more fundamental consistency considerations of the ensuing intelligence. However, a genuine intelligence such as the human intellect thrives primarily on consistency, affording discretion to the subject agencies in determining the correct from the incorrect, without requiring external feedback.
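The distinction can be caricatured in a few lines of code (a loose analogy only; the transcript’s notions of consistency and correctness are far broader): correctness needs an external answer key, whereas consistency can be judged from within:

```python
# Correctness: meaningless without an externally supplied ground truth.
def externally_correct(prediction: str, ground_truth: str) -> bool:
    return prediction == ground_truth

# Consistency: judged against nothing but the agent's own body of beliefs,
# here crudely modeled as "no belief asserted alongside its negation".
def internally_consistent(beliefs: set) -> bool:
    return not any(("not " + b) in beliefs for b in beliefs)

print(externally_correct("bird", "bird"))                   # True, but only given the key
print(internally_consistent({"it flies", "not it flies"}))  # False, no key required
```

In the transcript’s framing, prevailing architectures optimize for the first kind of verdict, while a genuine intelligence is said to thrive on the second.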
All in all, achieving a truly general and artificial intelligence is constrained by the very acts of programming and knowledge modeling undertaken for the purpose, both operations being inherently complicit in the proxy mechanization of the synthetic intelligent agents. As explained earlier in video 5.2, Adjectivization & Acontextualization address the proxy contextualization & eventualization issues respectively in the proposed method.
The umbrella term for the generic characterization of the human intellect – one that allows for human-specific cognition, albeit limited by the described deficiencies – is Object-Orientation, termed as such in view of the nature of the principal involvement. In materializing any cognition, such an orientation manifests as attributing an independent and absolute disposition to –
- the given phenomenon (the Object),
- the vantage on the phenomenon (the human Subject), and
- the comprised interfaces (such as, say, human logic or sentience).
On a philosophical note, this understanding encapsulates the essence of the Cogito argument – the fundamental principle of René Descartes’s philosophy that arguably provides certain foundations for knowledge in the face of radical doubt. Object-Orientation relates to all human pursuit of structured knowledge, an approach inevitably marked by a supplementing deterministic intent of the involved subject agent towards the given object entity, as both a means and an end. In the context of this video series, the deterministic intent is chiefly typified by the algorithmic approach to programming. Thus, the method to overcome the identified limitations is all about mitigating the inherent evils of algorithmic programming, while being aware that the approach itself cannot simply be wished away.
To cite academic references, Husserl’s idea of the phenomenological natural attitude offers the closest parallel to Object-Orientation, though its rendition is essentially qualitative, limiting the operational possibilities. On the other hand, an appreciation of Object-Orientation leads to a more operationally viable theory of mind – the Omniject-Orientation, the topic of the next video in the series. Within the generic and encompassing contexts afforded by Omniject-Orientation, some of the most abstract human cognitive faculties – consciousness, discretion, ego, memory, identities, & innate sensibilities, etc. – get rendered practicable in simulating the human intellect. Such an endeavor is riddled with numerous challenges that here find a fresh operational footing, including, to name a few –
- Hegelian “dialectics” and the notion of self-consciousness,
- Kantian notions such as Apperception and Transcendental Schemata,
- Heidegger’s fundamental ontology and a host of Continental philosophical ideas.
Of course, conceiving the foundations for the newer mathematics required – the Omniject-oriented strain of it – in formulating the methodology forms the most formidable challenge of all.

Video 6 Summary
Simply put, Video #6 (of 10) offers a more concrete definition of the core vulnerability that limits the human method, previously introduced as the ‘human natural attitude’.
- Human innate tendency: Humans naturally separate themselves (as the subject) from the world (the object) and construct systems of knowledge between the two. These systems are necessarily ‘Object-Oriented’, in that the knowledge discernment flows in the direction of the object. This framework shapes everything – from how we think, experience, imagine, reason, and communicate to, ultimately, how we subsist. For human intellect to thrive, these object-oriented contexts are not only integral but essential. Yet this very tendency, while sustaining human intellect, presents a formidable challenge to developing a truly general form of AI.
- I think, therefore, I am: René Descartes’ Cogito argument and the related concept of radical skepticism – which arguably both describe and sustain the modern scientific method – can be seen as echoing Object-Orientation. This approach of centering inquiry on the object has undoubtedly propelled human progress, giving rise to powerful tools such as science and mathematics. Its chief strength lies in enabling correctness in the determination of the object.
- Consistency over Correctness: Considerations of correctness are fundamental to all intelligence-related pursuits. Localized gradations of consistency – such as logical conformity, contextual alignment, and even meta-contextual coherence – are part and parcel of this correctness. Within Omnijective frameworks, however, the idea of consistency is global and systemic, referring to a unified process blueprint that accounts for every dimension of human or human-like cognition. Achieving this systemic consistency is central to defining any method for human-like general AI, with notions of correctness emerging naturally from that consistency.
- Omniject-Orientation: Although Omniject-Orientation is introduced here as a new “theory” of mind offering a more holistic alternative to object-orientation, it is better understood as a “method” for building a human-like mind. Given its goal of defining a single, consistent process that captures every nuance of human cognition, it is more aptly described as a “method for everything”. Omniject-Orientation is the focus of the entire Video 7 subseries – The Omnijective Method: From Beyond the Singularity.
- A First-of-Its-Kind Exploration: The Video 7 subseries presents a complex framework in an accessible way without diluting its depth. It argues that radical skepticism falls short for developing a general AI method, calling instead for an even deeper level of questioning. While challenging everything we know and accept today, it seeks to offer a method grounded in what we can actually do.