Download English-language ISI Article No. 4661
Persian translation of the article title

Tools for quality control in simulation

English title
Tools for quality control in simulation
Article code: 4661
Year of publication: 2001
English article length: 8 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal : Building and Environment, Volume 36, Issue 6, July 2001, Pages 673–680

Translated keywords
Commercial buildings - Residential buildings - Simulation - Energy - Quality assurance - Web technologies - User survey

English abstract

This research examines the nature of environmental design information used by building designers. The goal is to identify commonality in the types of information that design tools should produce. It presumes that improving the use of design tools will lead to improved building performance. Through practitioner interviews, it investigates the application of design decision support tools by building designers, and it proposes a means of increasing designers’ use of these tools. This proposal derives from the observation that systematic quality assurance (QA) systems are seldom used with simulation-based tools. The proposal is a QA system comprising (a) a simulation veracity test akin to the Turing test of computer intelligence; (b) an Internet database of building performance information; and (c) post-analysis tools that define the reliability of design tool output.

English introduction

This research [1] asks: what is the nature of the environmental design information sought by building designers? It assumes that improved building performance is the goal of all designers. For the research, we interviewed users of building environmental design decision support tools. The goal was to identify whether common needs exist across different environmental design topics for particular types of tools and information. All environmental design decision support tools are “simulations” of some imagined reality. These simulations can be charts of building performance versus window size in a solar house design handbook; an R-value calculation; a wind tunnel test; or the use of a computer to predict the performance of a building. Complex computer programs like DOE2 [2] and RADIANCE [3], conventionally known as ‘simulation’ programs, are simply more detailed, and potentially more realistic, simulation tools.

English conclusion

Not only for senior partners in architecture firms but for all users of performance prediction software, the single greatest need right now is for QA mechanisms. The interviews described above have shown that all users require some means of ensuring that the model they have created with a design tool represents the real building. If performance prediction programs contained good QA tools, then architects’ interest in the environmental quality of their buildings would naturally drive the use of this software.

The problem for these architects is that there is no independent measure, no benchmark to legitimise the output. We can measure something as simple as an R-value calculation against a benchmark in a code or standard. Yet even for this R-value calculation, guaranteeing quality is difficult. No systems exist for independently verifying the calculation, apart from repeating it carefully and comparing the results of the two calculations. The issue is not the precision of each number but the accuracy of the relationship between the numbers and the reality they represent. One cannot easily scan the output and see that something in it is inconsistent or illogical.

Improvement of QA procedures for environmental performance prediction will make performance prediction design tools more accessible not only to the professionals who currently use them but also to those architects who currently avoid them. Design simulation requires building designers to develop a mental model of the relationship between the real world and the information they are feeding into and getting back from the simulation. The quality of this mental model determines the quality of the information that they can obtain from the simulation. If designers do not understand the simulation process, they cannot easily use the simulation results to inform their design. The conscientious but uninformed user will have a series of numbers and a set of concerns about their meaning and reliability. There is an associated danger that the casual but uninformed user will have a series of numbers they trust unreservedly.

5.1. Quality assurance — a process

It is not easy to devise a QA process for simple formula-based simulations such as an R-value calculation or a reverberation time (RT) calculation. QA of these calculations inevitably degenerates into a process of checking and rechecking the numbers entered into the formulae against their “book” values. Creating a QA process that ensures the checking of what those numbers represent is not so easy. QA for computer-based calculations still requires that this type of foundation work be done — validation and calibration are always necessary. The formulae and the data values entered must be checked against book values. Validation is normally the role of the writers of the program, and only needs to be done once, when the tool is first compiled. Calibration is required each time the software is set up by a new user.

QA in simulation software should allow the user to check the relationship between the performance predictions and the actual building design. Architects are likely to be less reluctant to take responsibility for the predictions of simulation software if that software produces reports in the language they use. The principal problem is how to establish a QA system for calibrating the output of a simulation program to ensure that its predictions represent reality.
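To make this foundation work concrete, the sketch below (in Python) illustrates the kind of “book value” check described above for a simple R-value calculation; the layer build-up, conductivities, benchmark and tolerance are hypothetical, and the sketch is an illustration of the idea rather than a prescribed procedure.

```python
# A minimal sketch of a "book value" QA check for a formula-based calculation,
# here a wall R-value. The layer data, conductivities and tolerance are
# hypothetical stand-ins for the code or standard an office works to.

BOOK_CONDUCTIVITY = {  # W/(m.K), assumed "book" values
    "brick": 0.77,
    "mineral_wool": 0.04,
    "plasterboard": 0.21,
}

def r_value(layers):
    """Total thermal resistance (m2.K/W) of a list of (material, thickness_m) layers."""
    return sum(thickness / BOOK_CONDUCTIVITY[material]
               for material, thickness in layers)

def qa_check(layers, benchmark_r, tolerance=0.05):
    """Compare the calculated R-value with a benchmark from a code or standard."""
    calculated = r_value(layers)
    within = abs(calculated - benchmark_r) / benchmark_r <= tolerance
    return calculated, within

wall = [("brick", 0.110), ("mineral_wool", 0.075), ("plasterboard", 0.013)]
calculated, ok = qa_check(wall, benchmark_r=2.0)
print(f"R = {calculated:.2f} m2.K/W, within tolerance of benchmark: {ok}")
```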
What is needed is a test for the output from a simulation program like the Turing Test [14] for the “existence” of computer-based intelligence. The goal is to test theoretically the output of any environmental simulation program. Turing introduced the Turing Test as ‘the imitation game’ in his 1950 article, where he considered the question “Can machines think?” In the ‘modern’ version of the test we connect an interrogator to one person and to one machine via computer terminals. The interrogator's task is to find out which of the two candidates is the machine and which is human, only by asking them questions. If the interrogator cannot decide within a certain time, then the computer is ‘intelligent’.

What is required is a QA test in which the user asks whether the data in front of them, representing a building's performance, are from a real building. Like the Turing Test, this test requires a minimum of three ‘players’. Player A asks questions; players B and C answer from their data. B has data on a real building. C has simulation results. If interrogator A cannot distinguish simulation from real, then the simulation quality is assured.

The problem with this simple QA idea is the same problem that affects the Turing ‘Test’: how to operationalise it. Very few offices can afford to have three people working on simulation and its QA. Also, if real building performance data are available, why simulate? In practice, it is likely that the test will involve only one person, with the computer taking the other two roles. Where possible, the interrogator should not be the simulationist, because it is too easy for self-assessment to become self-congratulation.

5.2. Quality assurance — a program

In computer-based simulation a post-processing program or utility would play the part both of the ‘player’ with simulation data and of the ‘player’ who has a real building. Real building data are likely to be a combination of case studies constructed from monitoring programmes in real buildings and structured parametric runs of the simulation program itself. What is essential to the operation of this QA process is an independent database of this information. Using this approach, an independent database of simulated and real building performance data ought to be available for calibrating each computer simulation package. Clearly, the authors of the program are not the generators of such data.

Finding relevant benchmark performance data for the climate and building type one is simulating will remain a complex information search task. For this reason, it is proposed that the database ought to be Internet accessible and able to be added to by submissions from all users. A model could be the CDDB. The key to making this work is a unique identifier, like the Internet web page URL, assigned to each building, so that people could upload and download ‘cases’ from their own set of real buildings. The database itself need not be too specific. Its primary content would be pointers to the locations of the full data on each case building.

The pre- and post-processor for a simulation program would access this database. Prior to simulation, the database would find similar buildings and inform the simulationist that: (a) precedents already exist for buildings of the type planned in climates like that selected; and (b) from those precedents, relevant simulation program input files exist describing buildings of this type.
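One way such a post-processing utility might play the imitation game is sketched below; the URL-keyed case record, the monthly energy figures and the naive interrogator are hypothetical stand-ins for the independent database and the human (or statistical) questioner.

```python
import random

# Hypothetical URL-keyed case record: monthly energy use (kWh/m2) for a
# monitored building, as it might be downloaded from the Internet database
# of real and simulated cases proposed above.
CASE_DATABASE = {
    "http://example.org/cases/office-wellington-01":
        [18, 16, 15, 12, 10, 9, 9, 10, 11, 13, 15, 17],
}

def imitation_game(simulated, real, interrogator):
    """Present the real and simulated datasets blind, in random order.
    If the interrogator cannot correctly pick out the simulation, this
    round of the QA test is passed."""
    candidates = [("real", real), ("simulated", simulated)]
    random.shuffle(candidates)
    guess = interrogator([data for _, data in candidates])
    return guess is None or candidates[guess][0] != "simulated"

def naive_interrogator(datasets):
    """Stand-in for the human (or statistical) questioner: flag a dataset
    whose annual total or seasonal swing looks implausible, else pass."""
    for i, months in enumerate(datasets):
        if not 50 <= sum(months) <= 500 or max(months) > 4 * min(months):
            return i
    return None  # cannot tell the candidates apart

simulated_case = [17, 15, 14, 12, 11, 9, 8, 9, 12, 13, 14, 16]
real_case = CASE_DATABASE["http://example.org/cases/office-wellington-01"]
print("QA round passed:", imitation_game(simulated_case, real_case, naive_interrogator))
```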
5.3. Quality assurance — an XML program?

The basic question posed by this QA test is a truism: the changes in the predictions of a simulation program following changes in building design should always be of the same scale and nature as the perturbations in performance observed in reality. This obvious “truth” is one that most simulationists would agree is necessary. For example, what use is a formula for calculating the reverberation time (RT) of an auditorium if it applies only to the size of auditorium for which it was derived? Or: if I place a carpet on the floor in my auditorium, does the direction (reduction) of the change in RT follow what happens in other auditoria?

The essential requirement of a computer program that performs this role of intelligent “agent”, advising the designer about each step in the design process, is that the agent/program understand the data it is working on. All these qualities of the database point in the direction of the “semantic web”. Most databases in daily use are relational databases — databases with columns of information that relate to each other, such as the temperature, barometric pressure, and location entries in a weather database. The relationships between the columns are the semantics — the meaning — of the data. These data are ripe for publication as a semantic web page … the resource description framework (RDF) which … is based on XML … allows computers to represent and share data just as HTML allows computers to represent and share hypertext. In fact it is just XML with some tips about which bits are data and how to find the meaning of the data [15].

The key to this proposal with the semantic web is that a document contains not only the data but also the links or references to the places on the web where a computer program can find “how to convert each term in the document it does not understand into a term it does understand”. With the appropriate RDFs, an XML document describing lighting performance measurements in an office building in Los Angeles might be used to create a realistic RADIANCE daylight simulation for San Diego this week; and next week it might form the basis of a DOE2 analysis of the impact of daylight on cooling equipment energy use in an LA doctor's surgery.

There are many advantages to this web-based approach. The most obvious is the accessibility of the data. Instead of a single database with a single structure, which requires many years of negotiation to define, each time a person sets up a new file or measures a new building it can be put on the web as another “datapoint”. All that needs to be done “centrally” is to provide a means of finding the data. This is where the concept of the uniform resource identifier (URI) is extremely significant. The most common form of URI is the Web page address, which is a particular form or subset of URI called a uniform resource locator (URL). A URI typically describes: (a) the mechanism used to access the resource; (b) the specific computer on which the resource is housed; and (c) the specific name of the resource (a file name) on that computer. What is required for buildings is a URI that adds to the URL or web address of the institution storing the building's performance data. In short, all it needs to be is the URL.
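As an illustration of such self-describing documents, the sketch below shows a hypothetical XML case record whose measurements carry pointers to their term definitions, together with a reader that maps the terms it knows onto local names; the element names, URLs and values are invented for the example.

```python
import xml.etree.ElementTree as ET

# A hypothetical self-describing performance record: each measurement carries
# a "definedBy" URL pointing to where a program that does not understand the
# term could look up its meaning (an RDF-style definition).
CASE_XML = """
<building uri="http://example.org/cases/office-la-03">
  <climate definedBy="http://example.org/terms#climate">mild</climate>
  <measurement definedBy="http://example.org/terms#illuminance" unit="lux">420</measurement>
  <measurement definedBy="http://example.org/terms#coolingEnergy" unit="kWh/m2.yr">38</measurement>
</building>
"""

# Terms this (hypothetical) simulation package already understands.
KNOWN_TERMS = {"http://example.org/terms#illuminance": "daylight_lux"}

def load_case(xml_text):
    """Map each measurement to a local name where the term is known; otherwise
    keep the definition URL so an inference layer could resolve it later."""
    root = ET.fromstring(xml_text)
    case = {"uri": root.get("uri"), "values": {}}
    for element in root.iter("measurement"):
        definition = element.get("definedBy")
        key = KNOWN_TERMS.get(definition, definition)
        case["values"][key] = (float(element.text), element.get("unit"))
    return case

print(load_case(CASE_XML))
```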
If each dataset is placed in cyberspace with its own built-in RDF definitions, in an XML document, then useful searches by a pre-processor could be constructed, such as: “find all the mild climate office buildings monitored in the past 10 years for which lighting measurement and energy consumption figures are available”. A similar search, concentrating only on buildings for which energy use data are stored, might be used by the energy performance simulation post-processor to find information with which to calibrate its predictions.

The simulation package authors do not need to carry out a complete analysis of the knowledge representation required to construct a computer-based ‘product’ model of a building [16], and hence of the translation of their input data into that model format. Rather, they need to provide a link from the program user to the RDF for their program. Inference engines developed by them or by others will provide the link to relevant data in other people's data formats. To paraphrase Berners-Lee: machines can give the appearance of thinking by answering questions that cause them to follow the links in a large database. The database of relationships might be structured like: a building is a thing, a house is a building, a door is a thing, a building has at least one door. Creating a useful database of this type is a huge task, and such a database typically has room for only one conceptual definition of a house. The web defines only one page at a time, not a whole system. The goal of the semantic web is to allow different sites to have their own definition of “house” and to develop an “inference layer” that allows machines to link the definitions. RDFs are the inference layer.
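Under these assumptions, the pre-processor search quoted above might look something like the sketch below; the case records are hypothetical and stand in for XML datasets whose RDF definitions have already been resolved into a common vocabulary.

```python
from datetime import date

# Hypothetical case records, as a pre-processor might assemble them after
# resolving each XML dataset's RDF term definitions into a common vocabulary.
CASES = [
    {"uri": "http://example.org/cases/office-la-03", "use": "office",
     "climate": "mild", "monitored": date(2000, 6, 1),
     "has_lighting_data": True, "has_energy_data": True},
    {"uri": "http://example.org/cases/house-oslo-11", "use": "house",
     "climate": "cold", "monitored": date(1995, 2, 1),
     "has_lighting_data": False, "has_energy_data": True},
]

def find_precedents(cases, use, climate, needs, years=10, today=date(2001, 7, 1)):
    """Find buildings of a given use and climate, monitored within the last
    `years` years, for which all the required kinds of data are available."""
    cutoff = today.replace(year=today.year - years)
    return [case["uri"] for case in cases
            if case["use"] == use
            and case["climate"] == climate
            and case["monitored"] >= cutoff
            and all(case.get(flag) for flag in needs)]

# "Find all the mild climate office buildings monitored in the past 10 years
# for which lighting measurement and energy consumption figures are available."
print(find_precedents(CASES, "office", "mild",
                      needs=["has_lighting_data", "has_energy_data"]))
```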
A further major advantage of the semantic web application of this approach is, in Berners-Lee's terms, evolvability. If an RDF exists for the input files of a program like DOE2 [17], then when an old version encounters a file from a newer version it can look up the relevant RDF for the new version to find the parts of the new file it can “understand”. The process of expanding the use of these QA tools is then one of evolution, and requires very little by way of international or inter-disciplinary standardisation. It carries within it the RDF tools that permit adaptation and machine learning, written in the only part that needs to be standardised — XML.

5.4. Quality assurance — inferences

A considerable advantage arises from the XML/RDF split in the presentation of data — on the web or anywhere else. The reasoning — the rules that define the relationships between the parts of a building — is explicitly removed from the simulation program, revealing the reasoning behind the analysis very clearly. This separation has several benefits when seeking to apply a QA process in simulation.

First, an aspect of simulation that the new analyst often finds puzzling is determination of the appropriate external environment to “apply” in a simulation. What analysts debate is how to characterise the ‘typical’ external environment. Is it an average day, week, or year? What might the risk to the building owner or operator be if the normally expected variations around the average occur from year to year? Stochastically valid risk analysis is essential in all QA procedures related to building performance simulation. In an XML system the weather data for a thermal or lighting simulation would contain the RDF definition of the meaning of its terms. This would enable a different XML-aware simulation to translate the columns of weather information into a format compatible with its own view of the world. It would also mean that each weather file would contain synoptic information on how typical it was, which could then be used by the simulation package to construct atypical weather scenarios.

A second and often-overlooked aspect of the external environment is the operational environment. The designer needs to know just how vulnerable the simulated performance will be to variations in the way the building is occupied or operated. If the building is no longer operated as we assumed it would be, what might the performance consequences be? XML-format data on the energy performance of other real or simulated buildings would contain data about the data (metadata) in the file. This metadata would describe the context for the measurements and hence permit the XML front end of the simulation package to infer how “typical” the usage patterns are, and hence how much they might be tweaked to test how sensitive the simulation output is to realistic variations in the assumed usage patterns.

Finally, the increased complexity of modern computer-based building performance simulation tools has not rid the design profession of its traditional problem with these tools: they evaluate completed designs. Guidance about how to improve a design typically comes only from the informed user looking backwards at how the existing design performs. An XML front end to a design process, such as modelling a building in CAD, would look up post-occupancy evaluation (POE) contributions to the Internet database. It might even generate initial design ideas based on successful precedents.

This research defines a development path for the next generation of design tools. It assumes that the next generation of design tools will be more detailed computer programs. It also assumes that simulation programs like DOE2 and RADIANCE, which a few expert “simulationists” currently use, will increasingly be part of the building designer's repertoire. QA will be a significant part of that future.