GNY Use Case - PEAK - Climate Change Predictive Data to Speed Divestment from Fossil Fuels

GNY is now predicting electricity supply and demand in California 5% more accurately than the Department of Energy. Over the next 6 weeks, GNY will build on these powerful neural nets to predict the exact date of "peak" fossil fuel consumption for California's electricity generation, and will build models showing how machine learning powered blockchain technology can solve the reliability problems that prevent large-scale adoption of renewable energy by many utilities. GNY's goal is for these data sets to advance the conversations investors are having about the long-term potential of fossil fuels to deliver ROI while ruining our planet.



We used 125 weather stations in California, reading these weather features every hour for two years:
'hourlyvisibility', 'hourlydrybulbtempf', 'hourlywetbulbtempf', 'hourlydewpointtempf', 'hourlyrelativehumidity', 'hourlystationpressure', 'hourlysealevelpressure', 'Hourlyprecip', 'hourlyaltimetersetting', 'dailyheatingdegreedays', 'dailycoolingdegreedays', 'hourlycoolingdegrees', 'demand', 'hourlyskyconditions_CLR'
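A minimal sketch of how a feature matrix with these columns might be assembled, assuming a pandas DataFrame with one row per station per hour. The column names are copied from the list above (including the mixed capitalization of 'Hourlyprecip'); the values here are synthetic placeholders, not real station readings.

```python
import pandas as pd

# Feature columns as listed above; 'demand' is the prediction target.
FEATURES = [
    'hourlyvisibility', 'hourlydrybulbtempf', 'hourlywetbulbtempf',
    'hourlydewpointtempf', 'hourlyrelativehumidity', 'hourlystationpressure',
    'hourlysealevelpressure', 'Hourlyprecip', 'hourlyaltimetersetting',
    'dailyheatingdegreedays', 'dailycoolingdegreedays', 'hourlycoolingdegrees',
    'demand', 'hourlyskyconditions_CLR',
]

# Synthetic one-row frame standing in for real hourly station data.
df = pd.DataFrame({name: [0.0] for name in FEATURES})

# Split into model inputs and the supply/demand target.
X = df.drop(columns=['demand'])
y = df['demand']
```

In practice the real data would be read from the station archives (e.g. with `pd.read_csv`) rather than constructed in memory.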
GNY uses a series of SELF LEARNING and SELF CORRECTING neural net classifiers that we train for accuracy, then feed into the LSTM RNN neural net predictor for the business problem, in the case below solar supply. GNY's aggregate NN automatically discards useless features to reduce the dimension of our dataset and strengthens the features that are productive. For example, to predict mean solar irradiation (Wh/m²), GNY's LSTM RNN takes wildfire activity into account alongside the standard parameters related to solar irradiance; including azimuth and zenith parameters in the LSTM RNN model significantly improves accuracy.

GNY uses a forward chaining strategy for cross validation on time series data, which is better suited than standard K-fold. In forward chaining with 3 folds, the train and validation sets look like:

fold 1: training [1], validation [2]
fold 2: training [1 2], validation [3]
fold 3: training [1 2 3], validation [4]

where 1, 2, 3, 4 represent the hours. This way successive training sets are supersets of those that come before them.
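The forward chaining scheme above can be sketched with scikit-learn's `TimeSeriesSplit`, which produces exactly this pattern of expanding training windows. Note that scikit-learn indexes samples from 0, whereas the folds above number the hours from 1; the structure is otherwise the same.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Four hourly samples with two illustrative features each.
X = np.arange(8).reshape(4, 2)

# Three forward-chaining folds: each training set is a superset
# of the one before it, and validation always lies in the future.
tscv = TimeSeriesSplit(n_splits=3)
for fold, (train_idx, val_idx) in enumerate(tscv.split(X), start=1):
    print(f"fold {fold}: training {list(train_idx)}, validation {list(val_idx)}")
# fold 1: training [0], validation [1]
# fold 2: training [0, 1], validation [2]
# fold 3: training [0, 1, 2], validation [3]
```

Unlike standard K-fold, no fold ever validates on hours that precede its training data, which is what makes this strategy appropriate for time series.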

OUTPUT: GNY Use Case - PEAK - show that 10% of local solar power supply meets all local demand - Climate Change Predictive Data to Speed Divestment from Fossil Fuels