Page 197

comprehending data


Iterate to remove duplicates

Processing a list to remove duplicates is one area where a list comprehension
can't help you, because duplicate removal is not a transformation; it's more of
a filter. A duplicate-removal filter needs to examine the list being created
as it is being created, which is not possible with a list comprehension.
To meet this new requirement, you'll need to revert to regular list iteration
code.
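The pattern described above can be sketched with a short example. The data here is made up for illustration; the point is that each candidate item is checked against the list built so far, something a list comprehension can't do:

```python
data = ["2.34", "3.21", "2.34", "2.45", "3.21"]

unique = []
for item in data:
    if item not in unique:    # examine the list as it is being built
        unique.append(item)

print(unique)    # the first occurrence of each value survives
```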






Assume that the fourth-from-last line of code in your current program is changed to this:

    james = sorted([sanitize(t) for t in james])

That is, instead of printing the sanitized and sorted data for James to the screen, this line of
code replaces James's unordered and nonuniform data with the sorted, sanitized copy.
Your next task is to write some code to remove any duplicates from the james list produced
by the preceding line of code. Start by creating a new list called unique_james, and then
populate it with the unique data items found in james. Additionally, provide code to display only
the top three fastest times for James.
Hint: you might want to consider using the not in operator.
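One possible solution sketch follows. The sanitize() function and the james data shown here are illustrative stand-ins (your program defines its own); this version simply normalizes '-' and ':' separators to '.' so the times sort and compare consistently:

```python
def sanitize(time_string):
    # Stand-in sanitizer: normalize '-' or ':' separators to '.'.
    if '-' in time_string:
        splitter = '-'
    elif ':' in time_string:
        splitter = ':'
    else:
        return time_string
    (mins, secs) = time_string.split(splitter)
    return mins + '.' + secs

james = ['2-34', '3:21', '2.34', '2.45', '3-21', '2:34']

# Replace James's data with a sorted, sanitized copy.
james = sorted([sanitize(t) for t in james])

# Regular iteration with the 'not in' operator removes duplicates.
unique_james = []
for each_t in james:
    if each_t not in unique_james:
        unique_james.append(each_t)

# Slicing the sorted, unique list gives the three fastest times.
print(unique_james[0:3])
```

Because the list is already sorted in ascending order, the first three unique entries are the three fastest times.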

































