Trends of Artificial Intelligence in Construction Management

Artificial intelligence (AI) approaches have been developed since the advent of information technology in the 1950s. With rising computing power, the discussion of the usefulness of AI has been refuelled by new, powerful algorithms and, in particular, by the availability of the internet as a vast resource of unstructured data.

This gives hope to construction management in particular, since construction projects have recently been becoming larger and more complex, i.e. they encompass more and more participants with diverging interests while the given frames of time and budget are getting tighter. Construction management is meant to establish an efficient organisation of all these issues and to predict the result with a high degree of precision and certainty.

This could be accomplished by the human mind as long as projects were smaller, but with recent developments the human mind is clearly pushed to its limits. Against this background, the possible support of AI for organisational tasks needs to be investigated on a theoretical level prior to developing tools. This paper is the extended version of the article ‘Artificial Intelligence in Construction Management – a Perspective’, presented at the Creative Construction Conference 2019, where the algorithmic and entropic scope of AI is investigated in the context of construction management. Efficient organisation is about restructuring systems into a set of well-separated subsystems, and human intelligence is required to bring in mainly two higher principles which AI fails to provide: the ability to prioritise, and creativity allowing for new approaches not derived from given data.

This paper additionally focuses on the aspect of in-situ coordination. This service is an aspect of organisation which is not separable and can therefore only be treated as a self-determined subsystem located outside of hierarchical control. At this point, algorithms of AI need to be investigated not so much as a substitute for the human mind but as a source of significant support.

1. Introduction

The coming equality of natural and artificial intelligence has often been predicted and postponed. Concepts of AI have been developed according to the current understanding of natural intelligence and received great acknowledgement, but then the human mind was understood a little further and the artificial attempts failed to keep up. With the rise of available computing power, old principles have been revitalised and are now reaching heights which truly seem able to compete with the human brain. Not much of the fundamental concepts has changed, but the complexity of the outcome reaches astonishing heights.

During the advent of computer science, the idea was established of managing the complex task of unique projects with the support of computers. Soon, algorithms based on graph theory allowed computing CPM, MPM and PERT networks. Still resting on strict definitions of situations, solutions were not achievable in many cases, due to the often contradictory character of bivalent restrictions or due to causal loops inhibiting determined procedures. As a consequence, fuzzy variables were first introduced into PERT diagrams, where durations were determined by beta distributions, based not on measured probabilities but on estimations given by experienced managers. Later, the concept of fuzzy sets was introduced in order to model vaguely determined parameters as well as equivalently vague interactions, provided by a vast range of expert knowledge. The main problem turned out to be not the well-working mathematics but, on the one hand, the procedure of obtaining sufficiently general and numerous situations from which to derive fundamental knowledge and, on the other hand, the applicability of equivalently vague resulting instructions and targets for the final execution. Therefore, the availability of reliable knowledge and information, regarding the overall organisational tasks as well as more local coordination problems, needs to be investigated in order to understand the support AI could possibly provide to decision-making.
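
To make the classical, strictly defined scheduling approach mentioned above concrete, the following sketch (in Python, with a hypothetical activity network and three-point estimates) computes PERT expected durations via the usual beta approximation and derives the project duration by a simple forward pass.

# Minimal PERT/CPM-style forward pass on a hypothetical activity network.
# Expected durations follow the classical beta approximation E = (a + 4m + b) / 6.

activities = {
    # name: (optimistic a, most likely m, pessimistic b, predecessors)
    "excavation": (3, 5, 9, []),
    "foundation": (6, 8, 14, ["excavation"]),
    "structure":  (20, 25, 36, ["foundation"]),
    "roof":       (5, 7, 11, ["structure"]),
    "services":   (10, 14, 20, ["structure"]),
    "finishes":   (12, 15, 24, ["roof", "services"]),
}

def expected_duration(a, m, b):
    return (a + 4 * m + b) / 6.0

earliest_finish = {}

def finish(name):
    # Earliest finish = latest predecessor finish + own expected duration.
    if name not in earliest_finish:
        a, m, b, preds = activities[name]
        start = max((finish(p) for p in preds), default=0.0)
        earliest_finish[name] = start + expected_duration(a, m, b)
    return earliest_finish[name]

project_duration = max(finish(n) for n in activities)
print(f"Expected project duration: {project_duration:.1f} days")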

As an extended version of the article ‘Artificial Intelligence in Construction Management – a Perspective’, this paper focuses additionally on utilising AI capabilities for local in-situ coordination problems, particularly those tackled in Section 6.


2. Construction Management as Complex Task

AI seems to be a promising approach to problems which overwhelm the human mind, be it due to the sheer volume of data or processes or, in particular, due to the given complexity. Issues concerning the volume of a task are understood as predestined for computer applications, while the issue of complexity is different. Various definitions of complexity have been given. They mainly refer to the behaviour of a system which cannot be described by local properties or next-neighbour interactions but only by the total interacting system. Such behaviour, in contrast to locality, is understood as being emergent. In former times, as long as the volumes of construction projects were limited, and due to a fairly strict separation of contractual work into trades, this task was manageable, although with some effort and not always successfully. Meanwhile, construction and real-estate projects are becoming much more voluminous, while budgetary and temporal margins are becoming tighter, e.g. with large turnkey-ready buildings. With this development, and clearly indicated by an increasing number of publicly known disastrous projects, construction management on this scale is possibly beyond the scope of the limited human mind. Construction management is essentially the efficient organisation of a large number of participating people or groups and an equally large number of technical construction elements or, more abstractly, virtual units such as activities, services and cost, including their vast set of nonlinear relationships. Clearly, the behaviour of the system ‘construction project’ is emergent, and the task of construction management is to lead it nonetheless, with high certainty, within a very narrow corridor to a very tight goal in terms of time and cost. If this is beyond the capabilities of man, will it be within the scope of AI?

3. Technical Approaches to Model AI

Since ‘artificial’ implies the creation of this ‘intelligence’ by the human mind, the principal methods and algorithms first need to be laid out.

3.1 Principle of Industriousness: Procedural Formulation

With the procedural and imperative formulation of tasks, a set of rules is elaborated in such a way that a complex task is treated correctly and the correct results are provided. These rules and instructions, developing from one well-defined state to the next well-defined state, are processed over huge volumes of data or over a long time, possibly repeatedly or iteratively. The resulting behaviour looks like ‘AI’, but all of the ‘I’ is preprogrammed, including all foreseeable particular and specific situations. This is the approach of classical software programming, and it poses the problem of formulating, concisely but absolutely strictly, the rules and instructions which represent the complete and complex behaviour. In construction management, this concept is used, e.g., within costing software, where values of single items are precisely assigned to a node within an unambiguous structure and treated locally there according to very clear rules. Only the repeated application of the instructions of where to receive the values from, how to process them and, in particular, where to cumulate them allows for an overall correct costing result.
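
As a toy illustration of this purely preprogrammed ‘intelligence’, the following sketch (the cost structure and figures are invented) cumulates item values up an unambiguous tree according to fixed rules:

# Hypothetical cost structure: each node either carries item values or children.
# The "intelligence" is entirely preprogrammed: fixed rules determine where
# values are assigned and where they are cumulated.

cost_structure = {
    "project": {
        "shell": {
            "earthworks": 120_000,
            "concrete works": 850_000,
        },
        "fit-out": {
            "drywall": 140_000,
            "flooring": 90_000,
        },
    }
}

def cumulate(node):
    # Leaf nodes hold item values; inner nodes cumulate their children.
    if isinstance(node, dict):
        return sum(cumulate(child) for child in node.values())
    return node

print(f"Total cost: {cumulate(cost_structure):,}")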

Comment and evaluation for construction management: This approach is widely used in construction management, but it is based purely on human intelligence: any development of a system that is not precisely understood cannot be broken down into instructions and can therefore never be actively modelled. Hence, this approach offers no progress in tackling complexity beyond human capabilities.

3.2 Virtual Intelligent Behaviour: Object-Oriented Formulation

In contrast to imperative formulations, where the creative mind is located outside the system, object-oriented design facilitates intrinsic behaviour. A large number of local rules connecting objects in order to determine the properties of a destination object are set up as valid and correct. Due to their strictly local character, their validity can easily be proven. Applying these rules to a large set of data, i.e. objects with their properties, recursively or iteratively, represents the behaviour of the system. The interaction of the local rules with the non-predetermined type and structure of the data set leads to emergent behaviour. This remains true as long as the set of rules is locally true and complete. The emergent behaviour reflects the correct situation and looks intelligent. However, such a system produces ‘intelligent’ behaviour only insofar as emergent behaviour is actually intelligent and not just complex; in fact, it is merely unintelligible. If the local rules are well determined, i.e. the system is perfectly well known, then the resulting behaviour does reflect the reality to be expected (apart from some artefacts due to imperfect modelling and execution of the local rules). This is the fundamental approach to simulation, mainly used with iterative processing. The mathematical means to model systems of this type close to the state of equilibrium are well established and finally lead to a set of linear differential equations. Far away from equilibrium states, methods analysing the adjacency matrix allow for investigating the role of the participating elements with regard to the stability of the system.

Remark: If, conversely, the behaviour to be modelled is known and the local rules to achieve it are to be developed, things become more difficult. This is the case with object-oriented programming, which is applicable only if the behaviour is clearly assignable to a limited number of local rules.

The challenge of intelligence modelled on the basis of interacting objects is that correctness and completeness of the set-up are fairly difficult to ensure but nonetheless inevitable. In construction engineering this concept is used, e.g., for all sorts of finite element methods; in construction management it forms the basis of the Ford algorithm for positioning activities on the time axis and of process simulation, or of clash detection within a Building Information Model (BIM). Furthermore, there is some potential in utilising this approach to support the in-situ coordination service.
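
A minimal sketch of such local rules producing a global result, in the spirit of the Ford algorithm mentioned above (the activity network is hypothetical): each activity only knows its own duration and its direct predecessors, and repeatedly applying this purely local rule until nothing changes lets a consistent overall schedule emerge.

# Ford-style relaxation: repeatedly apply the local rule
# "start >= predecessor finish" until no start time changes any more.

durations = {"A": 4, "B": 6, "C": 3, "D": 5}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

start = {name: 0 for name in durations}

changed = True
while changed:
    changed = False
    for name, preds in predecessors.items():
        earliest = max((start[p] + durations[p] for p in preds), default=0)
        if earliest > start[name]:
            start[name] = earliest
            changed = True

for name in durations:
    print(f"{name}: start {start[name]}, finish {start[name] + durations[name]}")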

Remark: This approach poses a specific problem: creating such systems tends to make users and constructors slaves to them, even if the complex behaviour is found not to match the observed or required reality. This is mainly because there is no way to track an error back to a single rule to be modified. Observing the emergent behaviour, in particular the part that does not match, only allows introducing additional rules on a trial basis and observing the hopefully improved result again. The connection between result and input is a strict one-way road.

Comment from the view of construction management: This approach is also widely used in construction management. Where all the local information is given, it is clear that complexity is only the emergent behaviour of the total system, controlled by nothing else. However, the completeness of the primary description of the system is essential to this understanding, since complexity implies irreducibility, i.e. missing a single element or interaction may change the whole picture to an incalculable degree.

3.3 Neural Networks

A completely different approach is taken by the concept of neural networks. This is an attempt to fight a way back from observed emergent behaviour to local rules. Intelligent behaviour can be observed in the real world, e.g. based on intelligent decisions of clever individuals. Methods such as neural networks construct weighted sets of rules capable of reproducing these decisions and of extrapolating this behaviour to proximate situations. This is an attempt to mimic the learning process of a human brain. However, a fundamental model needs to be constructed, utilising a set of parameters that are then optimised for a proper representation of the correct implications. This procedure implies that the pre-constructed model is correct or at least sufficiently general to cover the given issues. With neural networks, the underlying model is a multilayered linear combination of possibly nonlinear triggering elements. Such a pre-set already restricts the possible output; nevertheless it is promising. The neural network approach is particularly successful in optical and acoustical pattern recognition tasks. However, the learning process requires a huge set of information, in particular information that is somewhat orthogonal, in order to cover a given space of situations. Due to the required comparability of solution spaces, the resulting intelligence is limited to answers already contained in the learning material, never reaching beyond it.
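
The ‘multilayered linear combination of possibly nonlinear triggering elements’ can be made explicit in a few lines; the dimensions and weights below are arbitrary placeholders, not a trained model:

import numpy as np

# Minimal two-layer network: each layer is a linear combination of its inputs
# followed by a nonlinear "triggering" element (here a sigmoid).

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions: 4 input features, 3 hidden nodes, 1 output.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

def forward(x):
    hidden = sigmoid(W1 @ x + b1)      # first layer: linear combination + trigger
    return sigmoid(W2 @ hidden + b2)   # second layer: same structure again

print(forward(np.array([0.2, 0.5, 0.1, 0.9])))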

Comment from the view of construction management: This approach is fairly close to the historical attempts to tackle construction management during the 1980s. At that time they suffered both from the lack of sufficient data to learn from and from insufficient computing capacity to process this volume accordingly. Both seem to have increased greatly by now. So, speaking of AI to cover general construction management issues focuses essentially on the neural network approach.

4. Fundamental Approach to AI

AI is expected to decide better than, or at least as well as, a human being exposed to the same situation and parameters, i.e. based on an identical level of existing information. According to the second law of thermodynamics, within a closed system the total entropy S can only increase. Understood in terms of information, a gain of information ΔI corresponds to a decrease of entropy, ΔS = −k_B · ΔI · ln 2, and therewith the main law of informatics follows: within a closed system, information can only be lost, i.e. destroyed, but never generated. In particular, this implies that AI can in principle not generate knowledge which does not already exist, but can only process existing information into decisions based on mechanistic rules.
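
Stated compactly, the relation assumed here is the standard Brillouin negentropy correspondence (the notation is ours):

% A gain of information \Delta I (in bits) corresponds to an entropy decrease
\[
  \Delta S = -\,k_B \,\Delta I \,\ln 2 .
\]
% Combined with the second law for a closed system,
\[
  \Delta S \ge 0 \quad\Longrightarrow\quad \Delta I \le 0 ,
\]
% i.e. purely mechanistic processing can only lose information, never generate it.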

Remark: The technical approaches to AI discussed previously take account of this principle.

In contrast to this, an intelligent (human) mind might be able to contradict this principle. A comparable situation is the investigation of the existence and the entropic balance of Maxwell’s Demon. Taking into account the entropy required to accomplish the measurements on which a decision is based, the gained entropy exactly balances the spent entropy and thus proves the validity of the second law of thermodynamics even for the action of an intelligent mind. However, this applies only to deterministic reasoning, where the rules and information on which all decisions are purely based are given. To this extent, it again comes down to pure industriousness. Bringing in the creative mind, where we assume decisions to be made on the background of taste and fantasy, such considerations might not apply. In particular, the generation of information of different types has been investigated, e.g. by an intelligent mind, by a creative being and finally by self-organising mechanisms. For intelligent and creative beings, this may happen at the entropy cost of their own existence, whereas self-organisation reveals information which was merely hidden within the system and had no opportunity to become visible. Therefore AI is understood to operate within a closed system where only causal reasoning is possible, even if mechanisms of self-organisation are taken into account. As soon as external knowledge, i.e. information, is required, possibly in the form of creativity (of an external mind), the system is no longer a closed one.

Remark: The Turing Test, which is agreed to be the fundamental test for an AI, simply compares the machine to the human mind in a lengthy conversation. Apart from simple characteristics such as response time, which comes down to computing power, it merely tests the capability to answer questions in a way a human being considers worthy of an equal counterpart. It is not a competition and can in principle not establish superiority over the human mind.


5. General Construction Management Situation

Against this background, the question arises to which degree the operation of construction processes, i.e. construction management, is bound to and completely determined by available rules and information. In scheduling, all boundary conditions, such as activities, relationships and fixed dates, are given. The remaining task is to find an optimal solution to the problem of minimising the construction time while obeying all the predefined conditions, which is an ordinary, well-understood task. Thus the problem can be separated into two independent segments: first, the situation needs to be described completely and accurately beforehand, and second, any mechanism may approach the optimisation problem. The latter is certainly, whether difficult or simple, a matter of causal reasoning, probably minimising losses and including soft facts such as fuzzy variables or probabilities, and it may thus be handed to AI or to human intelligence without violating major principles such as the generation of entropy. The major task, however, is to describe the situation accordingly. Mathematical completeness is not feasible, so it comes down to judging which issues are relevant enough to be modelled. Again, this might in principle be a matter of causal reasoning as well, but for individual, large and complex projects this does not seem possible, and it therefore becomes decidedly a matter of a creative mind. On this basis, only support, not a substitute, can be expected from algorithms of AI.

5.1 Database

One of the most popular applications of AI is the IBM Watson portal. Its functionality describes very well the capability of AI based on neural networks. In principle, the ‘Watson’ intelligence rests on a vast variety of unstructured data available via the internet, such as reports, tweets, messages and other entries. The main capability is to operate on this raw data via comparatively loose semantic analysis processes, in contrast to the exact investigation of a classical search algorithm bound to logical strictness. Therefore, a huge amount of internet data of all sorts can be used. The same semantic approach applies to the questions being asked, and the provided answers are used for additional learning. However, it must be noted that internet data are not necessarily correct or relevant. To repair this, the help of a human operator is still required when acquiring data, in order to manually sort out erroneous or insignificant information.
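
As a toy contrast between strict keyword search and the looser, similarity-based matching described above, the following sketch scores invented free-text site entries against a query by cosine similarity of word counts rather than by exact matching:

from collections import Counter
from math import sqrt

documents = [
    "delay of concrete works due to late rebar delivery",
    "facade subcontractor reports missing anchor details",
    "rebar supplier confirms new delivery date next week",
]

def vector(text):
    # Bag-of-words representation: word -> count.
    return Counter(text.lower().split())

def cosine(a, b):
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

query = vector("rebar delivery delay")
for doc in documents:
    print(f"{cosine(query, vector(doc)):.2f}  {doc}")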

For construction management, first of all the BIM is available as a presumably complete representation of the building to be erected. The model is created as a highly sophisticated form of planning, i.e. in three dimensions, as physical objects including all of their physical and logical interactions. It is treated and assumed to be a strict and logical model, which needs to be failure-free and complete. As soon as construction management comes into play, these physical elements are to be realised and processed on a time axis, optimising project duration and cost without giving up the quality defined as a perfect match to the contracts. Yet, since projects are unique, there are no criteria available as to whether these procedures will be or have been carried out with optimal efficiency. As current projects demonstrate, the knowledge of how to perform well in this respect is obviously not that strict; otherwise, classical algorithms such as CPM and Ford would solve the given task on a mathematical basis fairly well. Explicitly, the particular elements of a model representing managerial issues, such as detailed contracts and sub-target dates, as well as their organisational interaction, are subject to in-situ coordination and are therefore in principle not available. The laborious and extensive task of coordination itself is defined as a costly service to be delivered during the execution of the project and is therefore an element of the model which in principle cannot be determined a priori. Thus, exactly the badly required part of the construction management model is not available within the BIM but needs to be modelled otherwise. Where object-oriented approaches work very well for BIM, the potential support of AI algorithms for in-situ coordination is tackled in Section 6.

5.2 Information

The database for this fuzzy knowledge seems to suffer from some difficulties. Practically no significant information, regarding either positive or negative experience, is publicly available on the net. Knowledge of this kind (experience) is treated as a specific asset (know-how) of the companies and is therefore deliberately never published. Thus, the existing knowledge is available only within the companies and is therefore limited in volume as a matter of principle. Furthermore, management knowledge is sourced basically from finished projects of the company and from people, i.e. from their specific education. The latter is derived from abstract experience, i.e. academic examples, and from structured knowledge of how to treat situations in a more abstract way via methodical approaches. Both of these sources of knowledge are not documented but bound to the respective persons as human capital. Thus, none of this is in principle accessible for analysis by a neural network.

Remark: Beyond this well-reasoned situation, further experience worsening the situation is observed and reported by many participants. According to numerous investigations conducted for expertise requests, even the knowledge gained from closed projects is usually not documented, neither in a structured way nor as unstructured data. Had the principles of knowledge management been followed, the respective projects would not have run into the problems for which an expertise is required in the first place. Furthermore, the people holding this type of knowledge are in particular project managers and construction supervisors, who solve the actual problems with higher priority than securing the knowledge for later projects. Moreover, the coordination part of project management mainly means acting quickly on upcoming situations, leading in many cases to intuitive reactions rather than data-based decision-making. Finally, the failure culture plays a significant role: the knowledge needs to be derived from well-handled projects as well as from well-understood failures.

In order to investigate emergent behaviour via a neural network, we face a statistical problem: the number of comparable projects or situations from which emergent behaviour could be learned is limited. Statistical considerations strictly limit the exploitability of data, with no exception for neural networks. Significance is measured in multiples of the standard deviation, which need to be increased by factors depending on the sample size according to the Student t distribution. Therefore, a minimum sample size of the order of 10^2 is required for reliable conclusions. Since the sample size refers to projects or situations of comparable type, the number of classes of comparable situations needs to be fairly low. However, since the parameters of construction management are legion, almost no two projects can really be judged comparable. For a virtual set of, e.g., 10 parameters (far too few) with five options each (also far too few), the number of incomparable situations would already rise to 5^10 ≈ 10^7. Thus, extracting reliable information from the raw data of closed projects would require millions of projects, which are not available under any circumstances.
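
The combinatorial estimate can be verified directly; the parameter counts below are the hypothetical ones from the text:

# Combinatorial check of the estimate in the text: 10 parameters with
# 5 options each already span roughly 10 million incomparable situations.

parameters = 10
options_per_parameter = 5
min_samples_per_class = 100   # order of 10^2 for statistical significance

classes = options_per_parameter ** parameters
print(f"distinct situation classes: {classes:,}")                  # 9,765,625
print(f"projects needed for significance: {classes * min_samples_per_class:,}")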

5.3 Organisation

Since it is in principle not possible to derive intelligence from experience, the solution lies in breaking up complex projects exhibiting strongly emergent behaviour into a number of smaller units, which become less specific and therefore more general and less complex. Therewith, both the availability and the applicability of matching samples increase strongly. The concatenation of these sub-elements, however, is kept simple and linear and therefore does not recreate a complex system out of the simple subsystems. This is elaborated on the basis of systems theory and leads to the demand on expertise to break up complex systems into merely complicated systems, which are formed solely by the well-known graph-theoretical tree structures or rank-sorted network plans. This competency is exactly what is taught to managers as the central methodical approach to solving difficult, i.e. complex (non-standardised local), construction situations based on fundamental knowledge. Setting up an organisation means developing complex behaviour into complicated behaviour, i.e. investigating separability. The German standard DIN 69901 defines a specific organisation as a central characteristic of unique projects. Creating a specific organisation corresponds precisely to breaking up complexity into a well-structured, i.e. linearly concatenated, set of subunits which can be treated separately and thus form a frame to solve the overall problem. This is accomplished in two steps: first, based on separability, the structure of the organisation is created (organisation planning); second, exactly this established organisation coordinates and maintains the separation in detail and on the fly (operation of the organisation).

The fundamental precondition of this process is total knowledge of the complexity, implying judgement of the interactions between absolutely every element, including the resulting derived consequences. This would only be accessible to AI (or any other intelligence) if the system were described, and thus describable, down to the very last detail. However, this information is in principle not available a priori. This would lead to the conclusion of a fundamentally non-solvable problem, unless the systems were created, or rather laid down, based on the understanding of separability. Since an organisation is specific to a project, it cannot exist a priori but needs to be generated based on the specific situation, be it in advance or during operation. Since elements and interactions are also not accessible a priori, they need to be developed along, and on the basis of, general structures. These are termed ‘views’, since they maintain only a small section of the total system but can be understood, i.e. ‘overviewed’, by the (human) person creating the elements or interrelations to be attached next. Therefore the structures need to be simple and clear, again referring solely to graph-theoretical trees and rank-able network plans. The total system is modelled via a possibly large number of different views, maintaining different interwoven substructures and aspects. In this way, only understood interactions are modelled, and irrelevant interactions and elements are omitted. Naturally, no proof of completeness exists. Thus, the task is less one of actively separating existing complex systems into complicated systems, and more one of generating, in a way compatible with the human mind, the description of a complex system on the basis of separability.
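
What ‘rank-able’ means operationally can be sketched as follows: a set of interactions can be treated as a merely complicated, hierarchically rankable structure exactly when its dependency graph contains no loops, which a topological sort detects (the example relations are invented):

from graphlib import TopologicalSorter, CycleError

# Hypothetical organisational interactions: element -> set of prerequisites.
relations = {
    "award contracts": {"define work packages"},
    "define work packages": {"structure project"},
    "structure project": set(),
    "coordinate trades": {"award contracts"},
}

try:
    ranks = list(TopologicalSorter(relations).static_order())
    print("rank-able (complicated, not complex):", ranks)
except CycleError as err:
    print("contains loops, i.e. not separable as given:", err)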

This process exactly represents the contribution of the human mind to the AI process. Under the given circumstances in construction management, however, this human contribution appears to be the major part. Once the preparation has been completed, the remaining task can in fact easily be assigned to common algorithmic means.

6. Perspectives for Short-Term Coordination in Construction Management

The previous sections have pointed out that approaching the general problem of construction management, i.e. transcribing complex systems into manageable complicated systems based on the criterion of separability, inevitably requires the contribution of the human mind. This regards in particular the evaluation of priorities in forming or releasing interactions, as an overall higher principle, and creativity, which brings in elements and interactions that are not derivable from given information but are helpful in the end.

6.1 Service of Coordination

However, the given information is limited in principle in some areas of construction management, leading to the requirement of in-situ coordination. Since, according to national law, most contracts in construction management are ‘work contracts’, detailed information about the interaction of specific trades and processes is not predefined and cannot be predicted. Thus, hierarchical organisation structures end at this point and give room and necessity to self-determined substructures. There, in contrast to the criterion of separability, information loops are permitted, based on the hope that they will not escalate but stabilise in due time. Since such a substructure needs the power to decide in order to arrive at operable results, means of stabilising control need to be added as well. This kind of substructure, i.e. the service of in-situ coordination, is essential to construction management.

6.2 Stability of Self-Determined Substructures

Based on the hierarchical structures, all of the detailed rules, interactions and restrictions would be available and formulated as valid for the specified area of the substructure.

However, the resulting behaviour would be unpredictable, unstable, oscillating and probably escalating. Therefore, means of making clear decisions are in principle not available, and stable solutions are not necessarily given. Even if they were, any modification would ruin such a solution and, the already fixed past included, future solutions would have far fewer degrees of freedom. Thus, newly adapted and altered solutions would be worse than any solution instantiated before.

Thus, higher principles are required to be followed when setting up solutions. Examples are preferring a ‘second best solution’ acceptable to all participants over a ‘first best solution’, or in particular making decisions based on the criterion of highest flexibility, in order to lose a minimum of degrees of freedom when modifications become necessary.

6.3 Second Best Solutions for In-Situ Coordination

Setting up this service of coordination is part of the primary organisation planning task, and it mainly manifests in assigning capable people, the respective responsibility and a frame of budget and time to work in.

However, coordination legally has no power to decide with respect to the trades, since the underlying contracts do not award such power. Therefore, decision-making needs to be to the advantage of all participants, to give them a reason to comply. Thus, fortunate decisions are required, taking into account even restrictions which have nothing to do with the actual project but are decisive for a particular player. Furthermore, ad hoc rules that may be helpful to the project but ignore or counteract the needs or interests of a participant are not advantageous. Thus, coordination would have to aim at being helpful rather than forceful, finding solutions such that the participants give the particular project a higher priority than their other interests.

6.4 Random Preferential Relationships and Boundary Conditions

In this more local context, and provided that the participants contribute positively, again all relevant information and restrictions are known. Yet the number of elements, interactions and boundary conditions has largely increased, i.e. the local complexity has strongly increased as well.

This leads to a vast number of options to be considered and chosen from when optimising the overall result with regard to some criterion, be it cost, time or another. This is mainly done manually, since many criteria are based on soft principles such as willingness to cooperate, personal preferences or incentives brought in by participation, which cannot easily be formulated for mathematical optimisation procedures. Against this background, the reduction of the large scale is usually achieved by introducing preferential boundary conditions and relationships from the view of the project itself. In many situations, randomly defined relationships are also used just to make the problem unambiguous, e.g. for setting up random sequences on shared resources. However, these are ruled by hierarchically enforced arguments and in particular counteract the interests of the participants, or at least ignore them. In order to maintain the general incentive for all players, neither random rules nor head-on rules from the project should be introduced. As a consequence, the number of organisational options becomes immense, and any attempt to manually sort out a single solution which is optimal with respect to any criterion remains inconceivable.

In order to identify truly optimised solutions, no additional conditions can be introduced to simplify the problem, leaving a system comprising a large set of elements subject to an even larger set of interaction rules to be observed.

6.5 Utilisation of AI

As pointed out earlier, in-situ coordination is certainly in no way subject to complete solutions provided by mechanisms of AI. Yet algorithms of AI may provide major support for manually solving the respective optimisation problem.

First of all, the described situation can easily be modelled as an object-oriented set-up of elements and interrelations, and thus it becomes subject to simulation approaches. As long as elements can be formulated as variables q_i and interactions as differentiable functions q_i = q_i(q_j), which can be developed into Taylor series, means of solving differential equations are available.

This set of linear differential equations is generally solved by exponential functions of the type q_i(t) = c_i · e^(λ_i t), with generally complex exponents λ_i, leading to exponentially escalating or damped, possibly oscillating behaviour. Under the given circumstances no explicit answers can be expected from AI algorithms, but at least some information is revealed regarding the stability of situations and thus the principal sensitivity against modifications and variations. Since the local system at hand is by definition not complete, changes and modifications are to be expected. Thus, the dynamics of the system in close proximity to solutions, i.e. states of equilibrium, needs to be investigated, and stability becomes the crucial criterion for selecting the next steps to be taken in in-situ coordination.
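
A minimal sketch of such a stability check, assuming the local interactions have already been linearised around an equilibrium into a coefficient matrix (the matrix below is hypothetical): the real parts of its eigenvalues indicate damped or escalating behaviour, the imaginary parts oscillation.

import numpy as np

# Hypothetical linearised interaction matrix A of a small local system:
# dq/dt = A q near an equilibrium state.
A = np.array([
    [-0.5,  0.2,  0.0],
    [ 0.1, -0.3,  0.4],
    [ 0.0, -0.2, -0.1],
])

eigenvalues = np.linalg.eigvals(A)
for lam in eigenvalues:
    kind = "damped" if lam.real < 0 else "escalating"
    if abs(lam.imag) > 1e-12:
        kind += ", oscillating"
    print(f"lambda = {lam.real:+.3f}{lam.imag:+.3f}j -> {kind}")

print("locally stable:", bool(np.all(eigenvalues.real < 0)))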

Even if variables and interactions cannot be formulated with the same mathematical precision, an adjacency matrix A_i,j of estimated impact strengths and means of averaging influence such as the Active Sum AS_i = Σ_j A_i,j and the Passive Sum PS_i = Σ_j A_j,i characterise the role of a player or a fact within the context of the local system. The strength of impact can also be formulated in more intuitive terms, e.g. ranging from ‘none’ to ‘very strong’, represented by just four degrees. This corresponds closely to the approach. The Active Sums and Passive Sums represent the direct action of an element on, or its reaction to, the others. To take multistep interactions, representing long-term behaviour, into account, the adjacency matrix A_i,j needs to be replaced by its power A^k_i,j, mirroring the k-th order. Evaluating higher-order Active Sums and Passive Sums allows deriving the character of a player or a fact as actively ruling the system, merely reacting to its behaviour, being irrelevant or possibly being critical for the development. Critical components in particular react strongly to modifications of the system while concurrently acting strongly on the system themselves. They are therefore members of numerous loops amplifying the respective effects and thus leading to escalating or oscillating behaviour. On this background, the elements which are crucial for stable behaviour can be identified and treated accordingly.
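
The Active Sum / Passive Sum evaluation can be sketched on a hypothetical impact matrix with coarse strengths (0 = none … 3 = very strong); the k-th matrix power accounts for multistep influence:

import numpy as np

# Hypothetical impact matrix: A[i, j] = how strongly element i acts on element j,
# on a coarse scale 0 (none) to 3 (very strong).
labels = ["shell contractor", "MEP contractor", "site logistics", "client changes"]
A = np.array([
    [0, 2, 1, 0],
    [1, 0, 2, 0],
    [2, 2, 0, 0],
    [3, 3, 1, 0],
])

k = 2                      # interaction order: direct (1), two-step (2), ...
Ak = np.linalg.matrix_power(A, k)

active = Ak.sum(axis=1)    # how strongly an element acts on the system
passive = Ak.sum(axis=0)   # how strongly the system acts on the element

for name, a, p in zip(labels, active, passive):
    if a > active.mean() and p > passive.mean():
        role = "critical"
    elif a > active.mean():
        role = "active"
    elif p > passive.mean():
        role = "reactive"
    else:
        role = "buffering"
    print(f"{name:18s}  AS={int(a):3d}  PS={int(p):3d}  -> {role}")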

Neural networks are basically the same as such an object-oriented set, but they include means to formulate the interaction rules (weights) by inherent methods (backpropagation), which is basically a multidimensional minimisation problem. This requires a large set of input data together with the expected results. Therefore, no direct support can be expected from neural networks, as no qualification of an input data set is available. However, for in-situ coordination, a large set of optional scenarios is available from the combinatorial variation of all variables entered into the system. Furthermore, the criteria of all the participants are known, i.e. what they prefer and to which degree. These are generally based on a multitude of nonlinear, mainly non-differentiable interaction functions and therefore cannot be subjected to analytical investigation approaches. The result is also expected to be formulated on the level of subjective terminology, as before, and thus corresponds to the concept. Against this background, no simple optimisation algorithm would be of help, as no analytical correspondence can be formulated. Here, we propose the utilisation of small neural networks with n inputs and possibly n output nodes: the network is fed with the set of all possible options formed by the variation of the variables (q_i), i = 1…n, and, as the expected result, with the distribution of agreement of all participants (f_i), i = 1…n, derived from an object-oriented simulation approach. The simulation is capable of presenting results including all non-analytical interactions between the variation of each variable q_i and the concurrent variation of every other variable. Based on this, backpropagation algorithms shape the multilevel linear equation whose coefficients are given by the weights of the neural connections. The extracted linear (or at least polynomial, with limited degree) equations then represent, in a strongly simplified way, the effect of the scenarios on the preference distribution, with as many nested summation terms as the network has levels.
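
A compressed sketch of the proposed set-up, with invented names and a toy stand-in for the object-oriented simulation: scenarios are generated by varying the coordination variables, the ‘simulation’ assigns each participant an agreement value, and a small network is fitted by plain gradient-descent backpropagation as a differentiable surrogate of those results.

import numpy as np

rng = np.random.default_rng(1)
n = 3                                   # number of coordination variables q_i

# Enumerate hypothetical scenarios: each q_i varied over 5 discrete options.
grid = np.linspace(0.0, 1.0, 5)
scenarios = np.array(np.meshgrid(*[grid] * n)).reshape(n, -1).T   # shape (125, 3)

def simulate_agreement(q):
    # Placeholder for the object-oriented in-situ simulation: returns one
    # agreement value per participant (here also n participants).
    return np.array([
        1.0 - abs(q[0] - 0.7) - 0.3 * q[2],
        1.0 - abs(q[1] - 0.4),
        1.0 - 0.5 * abs(q[0] - q[1]) - 0.2 * q[2],
    ])

targets = np.array([simulate_agreement(q) for q in scenarios])

# Small two-layer network: n inputs -> 8 hidden (sigmoid) -> n outputs.
W1, b1 = rng.normal(scale=0.5, size=(8, n)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(n, 8)), np.zeros(n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.05
for epoch in range(2000):
    h = sigmoid(scenarios @ W1.T + b1)          # forward pass
    out = h @ W2.T + b2
    err = out - targets                         # backpropagation of the error
    grad_W2 = err.T @ h / len(scenarios)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2) * h * (1 - h)
    grad_W1 = dh.T @ scenarios / len(scenarios)
    grad_b1 = dh.mean(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("mean squared error of the surrogate:", float((err ** 2).mean()))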

Depending on the type of network, sigmoidal trigger functions are introduced for each level as well, always maintaining differentiable solutions. Beyond the help of a multi-linearised representation of the results as a function of the scenarios, this approach allows deriving dependency factors such as D_j = (1/n) Σ_i ∂f_i/∂q_j, representing the sensitivity of the particular agreement towards the modification of certain variables, averaged over all sources or sinks in accordance with the Active Sums and Passive Sums of adjacency matrices. Finally, these values allow for choosing the next steps sensibly, without losing too many degrees of freedom for further decisions.
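
Such a dependency factor can be estimated numerically; the following self-contained sketch uses central differences on a hypothetical smooth agreement model standing in for the trained surrogate:

import numpy as np

def agreement(q):
    # Hypothetical smooth agreement values f_i(q) of three participants;
    # in practice this would be the trained surrogate network.
    return np.array([
        1.0 - (q[0] - 0.7) ** 2 - 0.3 * q[2],
        1.0 - (q[1] - 0.4) ** 2,
        1.0 - 0.5 * (q[0] - q[1]) ** 2 - 0.2 * q[2],
    ])

def dependency_factors(q0, eps=1e-3):
    # D_j = mean_i d f_i / d q_j, estimated by central differences.
    n = len(q0)
    factors = np.zeros(n)
    for j in range(n):
        up, down = q0.copy(), q0.copy()
        up[j] += eps
        down[j] -= eps
        factors[j] = np.mean((agreement(up) - agreement(down)) / (2 * eps))
    return factors

print(dependency_factors(np.array([0.5, 0.5, 0.5])))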

On a manual level, this is known as the agile management approach, which is used when the contractual situation does not cover all the parameters required to achieve the respective goal. It should therefore not be understood as ignoring contractual agreements but as shaping contracts in a way that makes cooperation for in-situ coordination possible and desirable. In the situation described here, work contracts explicitly pose this kind of problem.

Conclusion

From the market situation, it seems to be difficult to solve the task of efficiently organising construction (as a service) based on human intelligence alone. This is apparently owed to the fact that the behaviour of a project organisation is clearly complex and emergent and therefore cannot easily be predicted from the definition of local rules. Against this background, the application of AI instead of human intelligence suffers from some problems of principle. The already used and well-established imperative and object-oriented approaches cover all the areas where clear rules can be established, e.g. based on BIM including operative BIM, where cost and time are implemented as higher dimensions. However, this is limited to the factually and contractually predetermined and fixed hard facts. The situation changes as soon as it comes to the service of organisation, comprising coordination, i.e. the means to efficiently distribute information and motivation, e.g. by distributing incentives via contracting. In this context, locally valid rules are not available, so that directly applying imperative or object-oriented methods clearly misses the point. The attempt to use neural networks to elaborate such rules in a less strict way suffers from the lack of widely available data, as they are not published, while on a company level the available information is too limited in volume to provide statistically significant results. Breaking down the complex situations to be tackled into smaller, separable subtasks increases the generality of the situations, and therewith the universe of comparable samples increases as well. Taking furthermore into account that such generalised situations are no longer specific to a particular company and may thus be published, the database becomes substantial. However, this has already been done to a very large extent, leading to the present situation where no AI is required to derive valid rules because well-parameterised information is available. Thus, the only remaining difficult task is the prior provision of the separated complex system as a number of less complex subsystems, which serves as a precondition to any manual or algorithmic processing and sets up the specific organisation. Precisely this preparatory task cannot, in principle, be handed to AI; instead, the methodical processes to generate these well-separated structures are taught at universities as a specific competence of construction management. We therefore conclude the general need to understand organisation as well-separated structures, answering the transformation of complex situations into merely complicated tasks, in order to determine the range of AI, i.e. algorithmic, support for this principally human task of understanding a situation and forming a model of it, which inherently implies its solvability.

However, even when adequate hierarchical structures are prepared, the service of in-situ coordination remains as a self-determined subsystem which cannot be subdivided into further elements but is still too complex to be solved manually within acceptable time to a sensible degree of stability. This situation constitutes the exception wherever the limit of separability is met. In this context, it is not so much AI as a substitute for the human mind that is expected to be helpful, but rather the algorithms of AI as a support. The methods provided by neural networks and by the iterative stabilising of object-oriented models can at least lay out the given options and provide parameters mirroring the particular characteristics and the role an element plays with regard to stability. In the end, this turns out to be of great help for the final manual decision-making on the basis of human intelligence.

Authored by

Udaya Bhaaskar Bulusu, CEO, Integrated Business & Project Management Consultants