2. Deliverables¶
This section defines the project deliverables. Functionality is to be implemented in the module named floodsystem.
Milestones and deadlines
Project deliverables/tasks are structured into two milestones. Milestone 1 must be delivered by the interim marking session, and Milestone 2 by the final marking session. You may deliver early by signing off at the Help Desk.
Clarifications
Clarifications can be sought at the Help Desk.
Task completion, interfaces and demonstration programs
Each task requires the implementation of functionality that can be accessed via a specified interface, usually a function signature (function name, arguments and return values). At the end of each task is a description of a demonstration program that must be provided. Demonstration programs must have the structure:

def run():
    # Put code here that demonstrates functionality

if __name__ == "__main__":
    run()
You should expect to run demonstration programs during a marking session.
Important
Conforming to the specified public interface is critical as this will allow the interface team to work independently of your development (and it will allow automated testing of your work).
Testing
Write tests as you progress through the tasks (see Test framework) and add deliverables and tests to the automated testing system (see Automated testing).
Tip
To deliver on a Task, you will often want to implement more functions than just the required function interface. Use additional functions to:
Modularise and simplify your library.
Allow re-use of functions across tasks.
Simplify testing.
As you work through the Tasks, look for opportunities to re-structure code in order to re-use functions.
2.1. Milestone 1¶
Processing of monitoring station properties.
- Deadline: Mid-term sign-up session
- Points: 4
Caution
Do not use the ‘representative output’ in your pytest tests. Representative output is provided to help you, but would not be part of a real contract. Moreover, you are working with real-time data which will change.
2.1.1. Task 1A: build monitoring station data¶
This task has been completed for you in the template repository.
In a submodule station, create a class MonitoringStation that represents a monitoring station and has attributes:

- Station ID (string)
- Measurement ID (string)
- Name (string)
- Geographic coordinate (tuple(float, float))
- Typical low/high levels (tuple(float, float))
- River on which the station is located (string)
- Closest town to the station (string)

Implement the methods __init__ to initialise a station with data, and __repr__ for printing a description of the station.

In the submodule stationdata, implement a function that returns a list of MonitoringStation objects (for active stations with water level monitoring). To avoid excessive data requests, the function should save fetched data to file, and then optionally read from a cache file. The function should have the signature:

def build_station_list(use_cache=True):

The data should be retrieved from the online service documented at http://environment.data.gov.uk/flood-monitoring/doc/reference.
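The template repository already provides this class, but a minimal sketch may help fix ideas. Attribute names here (station_id, measure_id, name, coord, typical_range, river, town) are assumptions and may differ from the template's:

```python
class MonitoringStation:
    """Minimal sketch of a station record; attribute names are assumptions."""

    def __init__(self, station_id, measure_id, label, coord, typical_range,
                 river, town):
        self.station_id = station_id        # Station ID (string)
        self.measure_id = measure_id        # Measurement ID (string)
        self.name = label                   # Name (string)
        self.coord = coord                  # (latitude, longitude) floats
        self.typical_range = typical_range  # (low, high) floats, or None
        self.river = river                  # River name (string)
        self.town = town                    # Closest town (string)

    def __repr__(self):
        return ("Station name: {}\n  id: {}\n  measure id: {}\n"
                "  coordinate: {}\n  town: {}\n  river: {}\n"
                "  typical range: {}").format(
                    self.name, self.station_id, self.measure_id,
                    self.coord, self.town, self.river, self.typical_range)
```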
2.1.2. Task 1B: sort stations by distance¶
In the submodule geo, implement a function that, given a list of station objects and a coordinate p, returns a list of (station, distance) tuples, where distance (float) is the distance of the station (MonitoringStation) from the coordinate p. The returned list should be sorted by distance. The required function signature is:

def stations_by_distance(stations, p):

where stations is a list of MonitoringStation objects and p is a tuple of floats for the coordinate p.
Tip
The distance between two geographic coordinates (latitude/longitude) is computed using the haversine formula. You could program the haversine formula, or you could use a Python library to perform the computation for you, e.g. https://pypi.org/project/haversine/.
Hint
Build a list of all (station, distance) tuples, and use the provided function utils.sort_by_key to produce a list that is sorted by the second entry in the tuple.
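A sketch of the approach. The haversine helper is hand-rolled here in place of the suggested library so the fragment stands alone, stations are assumed to carry a coord attribute, and a plain sort stands in for utils.sort_by_key:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(p1, p2):
    """Great-circle distance in km between two (latitude, longitude) points."""
    lat1, lon1, lat2, lon2 = map(radians, (p1[0], p1[1], p2[0], p2[1]))
    a = (sin((lat2 - lat1) / 2)**2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2)**2)
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def stations_by_distance(stations, p):
    """Return (station, distance) tuples sorted by distance from p."""
    pairs = [(station, haversine_km(station.coord, p)) for station in stations]
    pairs.sort(key=lambda pair: pair[1])  # stand-in for utils.sort_by_key
    return pairs
```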
2.1.3. Task 1C: stations within radius¶
In the submodule geo, implement a function that returns a list of all stations (type MonitoringStation) within radius r of a geographic coordinate x. The required function signature is:

def stations_within_radius(stations, centre, r):

where stations is a list of MonitoringStation objects, centre is the coordinate x and r is the radius.
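A sketch, with the haversine helper from Task 1B repeated inline so the fragment runs on its own (in the library you would reuse a single helper). Stations are assumed to carry a coord attribute and r is taken to be in km:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(p1, p2):
    """Great-circle distance in km between two (latitude, longitude) points."""
    lat1, lon1, lat2, lon2 = map(radians, (p1[0], p1[1], p2[0], p2[1]))
    a = (sin((lat2 - lat1) / 2)**2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2)**2)
    return 2 * 6371 * asin(sqrt(a))

def stations_within_radius(stations, centre, r):
    """Return the stations within r km of the coordinate centre."""
    return [s for s in stations if haversine_km(s.coord, centre) <= r]
```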
2.1.4. Task 1D: rivers with a station(s)¶
In the submodule geo, develop a function that, given a list of station objects, returns a container (list/tuple/set) with the names of the rivers with a monitoring station. The function should have the signature:

def rivers_with_station(stations):

where stations is a list of MonitoringStation objects. The returned container should not contain duplicate entries.

In the submodule geo, implement a function that returns a Python dict (dictionary) that maps river names (the ‘key’) to a list of station objects on a given river. The function should have the signature:

def stations_by_river(stations):

where stations is a list of MonitoringStation objects.
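A sketch of both functions, assuming station objects carry a river attribute. A set handles the no-duplicates requirement directly:

```python
def rivers_with_station(stations):
    """Return a set of river names that have at least one monitoring station."""
    return {station.river for station in stations}

def stations_by_river(stations):
    """Map each river name to the list of station objects on that river."""
    mapping = {}
    for station in stations:
        mapping.setdefault(station.river, []).append(station)
    return mapping
```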
2.1.5. Task 1E: rivers by number of stations¶
Implement a function in geo that determines the N rivers with the greatest number of monitoring stations. It should return a list of (river name, number of stations) tuples, sorted by the number of stations. In the case that there are more rivers with the same number of stations as the Nth entry, include these rivers in the list. The function should have the signature:

def rivers_by_station_number(stations, N):

where stations is a list of MonitoringStation objects.
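One way to handle the tie condition is to find the station count of the Nth-ranked river and keep every river with at least that many stations. A sketch, using collections.Counter for the tallying and assuming a river attribute on station objects:

```python
from collections import Counter

def rivers_by_station_number(stations, N):
    """Return (river name, station count) tuples for the N rivers with the
    most stations, keeping any extra rivers tied with the Nth entry."""
    counts = Counter(station.river for station in stations)
    ranked = sorted(counts.items(), key=lambda item: item[1], reverse=True)
    if len(ranked) <= N:
        return ranked
    cutoff = ranked[N - 1][1]  # station count of the Nth river
    return [item for item in ranked if item[1] >= cutoff]
```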
2.1.6. Task 1F: typical low/high range consistency¶
It is suspected that some stations have inconsistent data for typical low/high ranges, namely that (i) no data is available; or (ii) the reported typical high range is less than the reported typical low. This needs to be checked so that stations with inconsistent data are not used erroneously in flood warning analysis.
Add a method to the MonitoringStation class that checks the typical high/low range data for consistency. The method should return True if the data is consistent and False if the data is inconsistent or unavailable. The method should have the signature:

def typical_range_consistent(self):

Implement in the submodule station a function that, given a list of station objects, returns a list of stations that have inconsistent data. The function should use MonitoringStation.typical_range_consistent, and should have the signature:

def inconsistent_typical_range_stations(stations):

where stations is a list of MonitoringStation objects.
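A cut-down sketch, using a stand-in class that carries only the typical_range attribute the check needs; a missing range is assumed to be represented as None:

```python
class MonitoringStation:
    """Stand-in with only the attribute needed for this task."""

    def __init__(self, typical_range):
        self.typical_range = typical_range  # (low, high) tuple, or None

    def typical_range_consistent(self):
        """Return True if typical-range data is present and low <= high."""
        if self.typical_range is None:
            return False
        low, high = self.typical_range
        return low is not None and high is not None and low <= high

def inconsistent_typical_range_stations(stations):
    """Return the stations whose typical-range data fails the check."""
    return [s for s in stations if not s.typical_range_consistent()]
```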
2.1.7. Optional extensions¶
Display the location of each station on a map (perhaps from Google Maps). Suitable Python libraries for this include Bokeh and Plotly.
Explore what other station information is available in the retrieved data. The function stationdata.build_station_list is a good place to start. Extend MonitoringStation to store any interesting station data as attributes.

Advanced: The MonitoringStation attributes (station data) are properties of the station and will not generally change. However, we could accidentally and mistakenly change an attribute in our code. For flood forecasting and warning, such an error could have dire consequences. Use property decorators to prevent accidental modification of the attributes.
2.2. Milestone 2¶
The focus of Milestone 2 is processing monitoring station real-time data to warn of flood risks.
- Deadline: End-of-term sign-up session
- Points: 8
Caution
Representative output for each demonstration program is provided as a guide. You will be working with real-time data, so the precise output will change with time.
2.2.1. Task 2A: fetch real-time river levels¶
This task has been completed for you in the template repository.
Extend the MonitoringStation class with an attribute latest_level (float), and implement in the stationdata submodule a function that updates the latest water level for all stations in a list using data fetched from the Internet. If level data is not available, the attribute latest_level should be set to None. The function should have the signature:

def update_water_levels(stations):

where stations is a list of MonitoringStation objects.
2.2.2. Task 2B: assess flood risk by level¶
Add a method to MonitoringStation that returns the latest water level as a fraction of the typical range, i.e. a ratio of 1.0 corresponds to a level at the typical high and a ratio of 0.0 corresponds to a level at the typical low. The method should have the signature:

def relative_water_level(self):

If the necessary data is not available or is inconsistent, the method should return None.

In the submodule flood, implement a function that returns a list of tuples, where each tuple holds (i) a station (object) at which the latest relative water level is over tol and (ii) the relative water level at the station. The returned list should be sorted by the relative level in descending order. The function should have the signature:

def stations_level_over_threshold(stations, tol):

where stations is a list of MonitoringStation objects. Consider only stations with consistent typical low/high data.
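A sketch of both pieces. The relative level is written here as a free function over station objects (assumed to carry typical_range and latest_level attributes) rather than a method, so the fragment stands alone; a range with high <= low is treated as inconsistent, which also avoids a division by zero:

```python
def relative_water_level(station):
    """Latest level as a fraction of the typical range, or None if the
    required data is missing or inconsistent."""
    if station.typical_range is None or station.latest_level is None:
        return None
    low, high = station.typical_range
    if low is None or high is None or high <= low:
        return None
    return (station.latest_level - low) / (high - low)

def stations_level_over_threshold(stations, tol):
    """Return (station, relative level) tuples for stations over tol,
    sorted by relative level in descending order."""
    over = []
    for station in stations:
        level = relative_water_level(station)
        if level is not None and level > tol:
            over.append((station, level))
    over.sort(key=lambda pair: pair[1], reverse=True)
    return over
```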
2.2.3. Task 2C: most at risk stations¶
Implement a function in the submodule flood that returns a list of the N stations (objects) at which the water level, relative to the typical range, is highest. The list should be sorted in descending order by relative level. The function should have the signature:

def stations_highest_rel_level(stations, N):

where stations is a list of MonitoringStation objects.
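A sketch; the relative-level calculation from Task 2B is repeated inline here so the fragment runs on its own (in the library you would reuse the Task 2B method):

```python
def stations_highest_rel_level(stations, N):
    """Return the N stations with the highest water level relative to the
    typical range, sorted in descending order of relative level."""
    def rel(station):
        if station.typical_range is None or station.latest_level is None:
            return None
        low, high = station.typical_range
        if low is None or high is None or high <= low:
            return None
        return (station.latest_level - low) / (high - low)

    ranked = [s for s in stations if rel(s) is not None]
    ranked.sort(key=rel, reverse=True)
    return ranked[:N]
```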
2.2.4. Task 2D: level data time history¶
This task has been completed for you in the template repository.
Implement in the submodule datafetcher a function that retrieves from the Internet the water level data for a given station ‘measure id’ over the period from the current time back to d days ago. It should return a tuple with the first entry being the sample times and the second entry being the water levels. The function should have the signature:

def fetch_measure_levels(measure_id, dt):

Typical use to retrieve the level data at a station over the past 10 days would be:

import datetime

dt = 10
dates, levels = fetch_measure_levels(station.measure_id, dt=datetime.timedelta(days=dt))
2.2.5. Task 2E: plot water level¶
Implement in a submodule plot a function that displays a plot (using Matplotlib) of the water level data against time for a station, and include on the plot lines for the typical low and high levels. The axes should be labelled and use the station name as the plot title. The function should have the signature:

def plot_water_levels(station, dates, levels):

where station is a MonitoringStation object.

Hint

Example code to display a plot using Matplotlib:

import matplotlib.pyplot as plt
from datetime import datetime, timedelta

t = [datetime(2016, 12, 30), datetime(2016, 12, 31), datetime(2017, 1, 1),
     datetime(2017, 1, 2), datetime(2017, 1, 3), datetime(2017, 1, 4),
     datetime(2017, 1, 5)]
level = [0.2, 0.7, 0.95, 0.92, 1.02, 0.91, 0.64]

# Plot
plt.plot(t, level)

# Add axis labels, rotate date labels and add plot title
plt.xlabel('date')
plt.ylabel('water level (m)')
plt.xticks(rotation=45)
plt.title("Station A")

# Display plot
plt.tight_layout()  # This makes sure plot does not cut off date labels
plt.show()
Optional: In place of Matplotlib, try using a web-centric plotting library such as Bokeh or Plotly.
Optional extension: Generalise your implementation such that it takes a list of up to 6 stations and displays the level at each station as a subplot inside a single plot.
2.2.6. Task 2F: function fitting¶
In a submodule analysis, implement a function that, given the water level time history (dates, levels) for a station, computes a least-squares fit of a polynomial of degree p to the water level data. The function should return a tuple of (i) the polynomial object and (ii) any shift of the time (date) axis (see below). The function should have the signature:

def polyfit(dates, levels, p):

Typical usage for a polynomial of degree 3 would be:

poly, d0 = polyfit(dates, levels, 3)

where poly is a numpy.poly1d object and d0 is any shift of the date (time) axis.

Hint

To work with dates as function arguments, e.g. a polynomial that depends on time, the dates need to be converted to floats. Matplotlib has a function date2num that from a list of datetime objects returns a list of float, where the floats are the time in days (including fractions of days) since the year 0001:

import matplotlib

x = matplotlib.dates.date2num(dates)
Hint
NumPy has tools for computing least-squares fits to data. The below example computes a least-squares fit for some data points, and plots the data points and the least-squares polynomial:

import numpy as np
import matplotlib.pyplot as plt

# Create set of 10 data points on interval (0, 2)
x = np.linspace(0, 2, 10)
y = [0.1, 0.09, 0.23, 0.34, 0.78, 0.74, 0.43, 0.31, 0.01, -0.05]

# Find coefficients of best-fit polynomial f(x) of degree 4
p_coeff = np.polyfit(x, y, 4)

# Convert coefficients into a polynomial that can be evaluated,
# e.g. poly(0.3)
poly = np.poly1d(p_coeff)

# Plot original data points
plt.plot(x, y, '.')

# Plot polynomial fit at 30 points along interval
x1 = np.linspace(x[0], x[-1], 30)
plt.plot(x1, poly(x1))

# Display plot
plt.show()
Caution
In the above example, if we changed the x interval (0, 2) to (10000, 10002), i.e.:

x = np.linspace(10000, 10002, 10)

NumPy prints the warning message:

RankWarning: Polyfit may be poorly conditioned
warnings.warn(msg, RankWarning)

This message is warning that floating point round-off errors will be significant and will affect accuracy. In simple terms, the issue is that when we raise a number between 10000 and 10002 to a power, small but important differences are effectively ‘lost’.

This issue arises if we work with dates converted to floats using matplotlib.dates.date2num, since it returns the number of days since the origin of the Gregorian calendar. The numbers will therefore be large. A way to improve the situation is with a change of variable:

import numpy as np
import matplotlib.pyplot as plt

# Create set of 10 data points on interval (10000, 10002)
x = np.linspace(10000, 10002, 10)
y = [0.1, 0.09, 0.23, 0.34, 0.78, 0.74, 0.43, 0.31, 0.01, -0.05]

# Using shifted x values, find coefficients of best-fit
# polynomial f(x) of degree 4
p_coeff = np.polyfit(x - x[0], y, 4)

# Convert coefficients into a polynomial that can be evaluated,
# e.g. poly(0.3)
poly = np.poly1d(p_coeff)

# Plot original data points
plt.plot(x, y, '.')

# Plot polynomial fit at 30 points along interval (note that the polynomial
# is evaluated using the shifted x)
x1 = np.linspace(x[0], x[-1], 30)
plt.plot(x1, poly(x1 - x[0]))

# Display plot
plt.show()
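Putting date2num and the change of variable together, a minimal sketch of the required polyfit function could look like this (returning the poly1d object and the shift d0, as specified above):

```python
import numpy as np
import matplotlib.dates

def polyfit(dates, levels, p):
    """Least-squares fit of a degree-p polynomial to water level data.

    Returns (poly, d0): poly is a numpy.poly1d to be evaluated at
    shifted times (x - d0), and d0 is the float time of the first date.
    """
    x = np.array(matplotlib.dates.date2num(dates))
    d0 = x[0]  # shift the time axis to avoid poor conditioning
    coeffs = np.polyfit(x - d0, levels, p)
    return np.poly1d(coeffs), d0
```

To evaluate the fit at the original dates, apply the same shift first, e.g. poly(matplotlib.dates.date2num(dates) - d0).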
In the submodule plot, add a function that plots the water level data and the best-fit polynomial. The function must have the signature:

def plot_water_level_with_fit(station, dates, levels, p):

where station is a MonitoringStation object.
Caution
Fitting high-degree polynomials to data is notoriously tricky, especially if the data is not very smooth (as will often be the case with water level data). The problem is that oscillations can appear at the ends of the interval. This is known as Runge’s phenomenon. You can observe this with the river level data by increasing the polynomial degree, say to 10, and the time interval, say to 10 days.
2.2.7. Task 2G: issuing flood warnings for towns¶
Using your implementation, list the towns where you assess the risk of flooding to be greatest. Explain the criteria that you have used in making your assessment, and rate the risk at ‘severe’, ‘high’, ‘moderate’ or ‘low’.
Note
This task is an opportunity to demonstrate your creativity and technical insights.
Tip
Consider how you could forecast whether the water level at a station is rising or falling.
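One way to act on this tip, building on Task 2F: fit a low-degree polynomial to recent level data and check the sign of its derivative at the latest sample time. This is a sketch, not a prescribed method; the time axis is assumed to be passed in as floats (e.g. from date2num):

```python
import numpy as np

def level_is_rising(times, levels, p=2):
    """Fit a degree-p polynomial to (time, level) samples and return True
    if the fitted level is rising at the most recent sample time."""
    x = np.asarray(times, dtype=float)
    # Shift the time axis as in Task 2F to avoid poor conditioning
    poly = np.poly1d(np.polyfit(x - x[0], levels, p))
    return bool(poly.deriv()(x[-1] - x[0]) > 0)
```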
2.2.8. Optional extensions¶
Show all stations on a map, and indicate by colour stations that are (i) below the typical range; (ii) within the typical range; (iii) above the typical range; or (iv) for which there is no level data.
Provide a web-based interface to your flood warning system.
Incorporate rainfall data from http://environment.data.gov.uk/flood-monitoring/doc/reference into your system.
Explore what other data from http://environment.data.gov.uk/flood-monitoring/doc/reference you could use to improve your monitoring and warning system. To start, examine the data that is already being retrieved but has not been used.