[Blog #3] Baseline simulations are definitely everything...

Here it is, my second-to-last blog post! It is amazing how fast summer goes by each year. 

For the past couple of weeks, I have been continuing the work that I highlighted in my previous blog post; specifically, I have been further developing the Measurement and Verification (M&V) Tracker and populating it with important site-specific metrics, as well as running optimal simulations for 24 generic chilled water plant models.

In addition to these two main assignments, I have been gaining experience with real customer projects through the OpenBlue Central Utility Plant (CUP) product. As an example, today I will be talking about what I have been doing with one of our customers in healthcare. But before that, let me give you a brief background on what I am going to talk about...

Using Plant Simulator, our modeling engineers can run both optimal and baseline simulations. Now, you might be wondering: why do we need to run baseline, or business-as-usual, simulations of central plant operations prior to optimization? Well, it turns out there are three answers to that question (I am really forcing the rule of three here). First, plant visibility has been, and still is, a challenge for many sites. This may be due to a lack of sensing and monitoring equipment, older equipment that cannot communicate its readings, or a failure to save trend data from existing meters. Running a baseline simulation can therefore help facility managers understand the true cost of their plant operations, supporting planning, analysis, and, to some extent, fault detection. Second, and this is the highlight of why CUP stands out as central plant optimization software: baseline simulations let us estimate central plant utility savings. Third and finally, baseline simulations matter because of plant diversity and complexity. Factors such as how the plant is operated (manually or automated), the mix of technologies, equipment age and life cycle, piping configurations, and many other plant-specific details make it important to simulate baseline operations in order to understand the specifics of a customer site.

Bringing this all together: running baseline simulations helps our modeling engineers estimate the savings that would be realized if a customer decided to implement our CUP software in their central plant. As with many experiments and research efforts, to check for improvement after a change, you need a baseline to compare against; the same goes here. By running both optimal and baseline simulations for a customer site, we can estimate how much electricity, and therefore money, the customer can save in running their plant. In some cases, customers can also see how energy policies, incentives, and/or rebates would further offset their operating costs.
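As a rough illustration of the comparison above, here is a minimal sketch in Python. The function name and all of the numbers are hypothetical, invented for demonstration; they do not come from any real project or from CUP itself:

```python
# Hypothetical sketch: estimating utility savings by comparing annual
# electricity use from a baseline (business-as-usual) simulation against
# an optimal simulation. All values below are made up for illustration.

def estimate_savings(baseline_kwh, optimized_kwh, rate_per_kwh):
    """Return (energy saved in kWh, cost saved in $, percent saved)."""
    energy_saved = baseline_kwh - optimized_kwh
    cost_saved = energy_saved * rate_per_kwh
    percent_saved = 100.0 * energy_saved / baseline_kwh
    return energy_saved, cost_saved, percent_saved

# Annual simulated plant electricity use (made-up values)
energy, cost, pct = estimate_savings(
    baseline_kwh=4_200_000,   # baseline simulation result
    optimized_kwh=3_780_000,  # optimal simulation result
    rate_per_kwh=0.12,        # assumed flat utility rate, $/kWh
)
print(f"Saved {energy:,.0f} kWh (${cost:,.0f}, {pct:.1f}%)")
```

In practice the comparison is far more involved (time-of-use rates, demand charges, weather normalization), but the core idea is exactly this: no baseline, no savings estimate.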

For this customer, I am tasked with helping calibrate the existing project model in Plant Simulator so that the baseline simulation results improve. Prior baseline simulations generated numbers that were off from measured results, the most notable being the plant efficiency in kilowatts per ton (kW/ton); simply put, the simulation results did not match the measured annual results. This created the need for new baseline models and simulations that land much closer to a typical chiller plant efficiency. Based on measured trend data and additional analyses, a number within the typical chiller plant efficiency range is far more plausible than the suspiciously efficient number the earlier simulations produced.
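For readers unfamiliar with the metric, kW/ton is simply total plant electrical input divided by cooling delivered, and lower is better. A tiny sketch with made-up numbers (the function and all values are illustrative, not from the customer project):

```python
# Hypothetical sketch of the kW/ton efficiency metric used to compare
# simulated vs. measured plant performance. Values are made up.

def plant_kw_per_ton(total_power_kw, cooling_load_tons):
    """Whole-plant efficiency: electric input (kW) per ton of cooling.
    Lower is better. Conventional whole plants are often cited roughly
    in the 0.8-1.2 kW/ton range (this varies widely by climate and
    design), so a simulation reporting far below what the plant could
    plausibly achieve is a sign the model needs recalibration."""
    return total_power_kw / cooling_load_tons

simulated = plant_kw_per_ton(total_power_kw=950, cooling_load_tons=1500)   # ~0.63
measured = plant_kw_per_ton(total_power_kw=1400, cooling_load_tons=1500)   # ~0.93
print(f"simulated: {simulated:.2f} kW/ton, measured: {measured:.2f} kW/ton")
```

A gap like the one above is the kind of mismatch that calibration is meant to close: the model's inputs and assumptions are adjusted until the simulated efficiency tracks the measured one.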

To make progress on this task, I have been helping one of the team's modeling engineers and reflecting the necessary changes in the existing project model. Unfortunately for me, calibrating results has not been easy (as anyone with debugging experience knows). Errors upon errors have come up, whether from my own user mistakes (such as forgetting to convert units or not hitting save after changing a model, which is definitely why I can't sleep at night, I'm sure) or from the most random bugs that none of us had ever encountered. The good news is that we managed to resolve them with the help of many of our team members (shout out to the people who replied to my emails as soon as I sent them out), resulting in a new plant efficiency number to calibrate even further.

All in all, the past couple of weeks have been... enlightening. I have re-learned the importance of calibration and debugging, of asking the right questions, and of trying out the many different possibilities that can arise while calibrating a measurement or debugging an unusual problem.

These are definitely skills I want to remember and keep practicing as I go back to school and apply them to my assignments (which will no doubt be the new thing that keeps me up at night, but I digress).