Commit a7330d8

Author: ryan-keenan

Merge pull request #9 from mreichelt/8-add-spaces-for-github-markdown
fix #8 by using correct GitHub Markdown
2 parents: 444b140 + e5a2f7f


writeup_template.md

Lines changed: 27 additions & 21 deletions
@@ -1,5 +1,6 @@
-##Writeup Template
-###You can use this file as a template for your writeup if you want to submit it as a markdown file, but feel free to use some other method and submit a pdf if you prefer.
+## Writeup Template
+
+### You can use this file as a template for your writeup if you want to submit it as a markdown file, but feel free to use some other method and submit a pdf if you prefer.
 
 ---
 
@@ -27,17 +28,20 @@ The goals / steps of this project are the following:
 [video1]: ./project_video.mp4 "Video"
 
 ## [Rubric](https://review.udacity.com/#!/rubrics/571/view) Points
-###Here I will consider the rubric points individually and describe how I addressed each point in my implementation.
+
+### Here I will consider the rubric points individually and describe how I addressed each point in my implementation.
 
 ---
-###Writeup / README
 
-####1. Provide a Writeup / README that includes all the rubric points and how you addressed each one. You can submit your writeup as markdown or pdf. [Here](https://github.com/udacity/CarND-Advanced-Lane-Lines/blob/master/writeup_template.md) is a template writeup for this project you can use as a guide and a starting point.
+### Writeup / README
+
+#### 1. Provide a Writeup / README that includes all the rubric points and how you addressed each one. You can submit your writeup as markdown or pdf. [Here](https://github.com/udacity/CarND-Advanced-Lane-Lines/blob/master/writeup_template.md) is a template writeup for this project you can use as a guide and a starting point.
 
 You're reading it!
-###Camera Calibration
 
-####1. Briefly state how you computed the camera matrix and distortion coefficients. Provide an example of a distortion corrected calibration image.
+### Camera Calibration
+
+#### 1. Briefly state how you computed the camera matrix and distortion coefficients. Provide an example of a distortion corrected calibration image.
 
 The code for this step is contained in the first code cell of the IPython notebook located in "./examples/example.ipynb" (or in lines # through # of the file called `some_file.py`).
 
@@ -47,21 +51,24 @@ I then used the output `objpoints` and `imgpoints` to compute the camera calibra
 
 ![alt text][image1]
 
-###Pipeline (single images)
+### Pipeline (single images)
+
+#### 1. Provide an example of a distortion-corrected image.
 
-####1. Provide an example of a distortion-corrected image.
 To demonstrate this step, I will describe how I apply the distortion correction to one of the test images like this one:
 ![alt text][image2]
-####2. Describe how (and identify where in your code) you used color transforms, gradients or other methods to create a thresholded binary image. Provide an example of a binary image result.
+
+#### 2. Describe how (and identify where in your code) you used color transforms, gradients or other methods to create a thresholded binary image. Provide an example of a binary image result.
+
 I used a combination of color and gradient thresholds to generate a binary image (thresholding steps at lines # through # in `another_file.py`). Here's an example of my output for this step. (note: this is not actually from one of the test images)
 
 ![alt text][image3]
 
-####3. Describe how (and identify where in your code) you performed a perspective transform and provide an example of a transformed image.
+#### 3. Describe how (and identify where in your code) you performed a perspective transform and provide an example of a transformed image.
 
 The code for my perspective transform includes a function called `warper()`, which appears in lines 1 through 8 in the file `example.py` (output_images/examples/example.py) (or, for example, in the 3rd code cell of the IPython notebook). The `warper()` function takes as inputs an image (`img`), as well as source (`src`) and destination (`dst`) points. I chose to hardcode the source and destination points in the following manner:
 
-```
+```python
 src = np.float32(
 [[(img_size[0] / 2) - 55, img_size[1] / 2 + 100],
 [((img_size[0] / 6) - 10), img_size[1]],
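A "combination of color and gradient thresholds" of the kind the template asks for is commonly an HLS S-channel threshold OR'd with a Sobel-x gradient threshold. The sketch below shows one such combination under that assumption; the function name `threshold_binary` and the threshold ranges are illustrative and not taken from `another_file.py`.

```python
import cv2
import numpy as np

def threshold_binary(img, s_thresh=(170, 255), sx_thresh=(20, 100)):
    """Combine an HLS S-channel threshold with a Sobel-x gradient threshold.
    Threshold ranges are illustrative defaults, not values from this repo."""
    hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
    l_channel = hls[:, :, 1]
    s_channel = hls[:, :, 2]

    # Gradient in x on the lightness channel, scaled to 0-255.
    sobelx = cv2.Sobel(l_channel, cv2.CV_64F, 1, 0)
    abs_sobelx = np.absolute(sobelx)
    scaled_sobel = np.uint8(255 * abs_sobelx / np.max(abs_sobelx))

    sx_binary = np.zeros_like(scaled_sobel)
    sx_binary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel <= sx_thresh[1])] = 1

    s_binary = np.zeros_like(s_channel)
    s_binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 1

    # A pixel passes if either the color test or the gradient test fires.
    combined = np.zeros_like(sx_binary)
    combined[(sx_binary == 1) | (s_binary == 1)] = 1
    return combined
```
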
@@ -72,8 +79,8 @@ dst = np.float32(
 [(img_size[0] / 4), img_size[1]],
 [(img_size[0] * 3 / 4), img_size[1]],
 [(img_size[0] * 3 / 4), 0]])
-
 ```
+
 This resulted in the following source and destination points:
 
 | Source | Destination |
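The `warper()` function referenced in the diff is not shown in full; a minimal version consistent with the hardcoded `src`/`dst` points above would be a thin wrapper around `cv2.getPerspectiveTransform` and `cv2.warpPerspective`, roughly as sketched here. The body is an assumption, not the contents of `example.py`.

```python
import cv2
import numpy as np

def warper(img, src, dst):
    """Warp img so the quadrilateral src maps onto dst (bird's-eye view).
    Sketch only; the real example.py may differ."""
    img_size = (img.shape[1], img.shape[0])
    M = cv2.getPerspectiveTransform(src, dst)       # forward transform
    # Minv = cv2.getPerspectiveTransform(dst, src)  # inverse, useful for un-warping later
    return cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR)
```
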
@@ -87,35 +94,34 @@ I verified that my perspective transform was working as expected by drawing the
 
 ![alt text][image4]
 
-####4. Describe how (and identify where in your code) you identified lane-line pixels and fit their positions with a polynomial?
+#### 4. Describe how (and identify where in your code) you identified lane-line pixels and fit their positions with a polynomial?
 
 Then I did some other stuff and fit my lane lines with a 2nd order polynomial kinda like this:
 
 ![alt text][image5]
 
-####5. Describe how (and identify where in your code) you calculated the radius of curvature of the lane and the position of the vehicle with respect to center.
+#### 5. Describe how (and identify where in your code) you calculated the radius of curvature of the lane and the position of the vehicle with respect to center.
 
 I did this in lines # through # in my code in `my_other_file.py`
 
-####6. Provide an example image of your result plotted back down onto the road such that the lane area is identified clearly.
+#### 6. Provide an example image of your result plotted back down onto the road such that the lane area is identified clearly.
 
 I implemented this step in lines # through # in my code in `yet_another_file.py` in the function `map_lane()`. Here is an example of my result on a test image:
 
 ![alt text][image6]
 
 ---
 
-###Pipeline (video)
+### Pipeline (video)
 
-####1. Provide a link to your final video output. Your pipeline should perform reasonably well on the entire project video (wobbly lines are ok but no catastrophic failures that would cause the car to drive off the road!).
+#### 1. Provide a link to your final video output. Your pipeline should perform reasonably well on the entire project video (wobbly lines are ok but no catastrophic failures that would cause the car to drive off the road!).
 
 Here's a [link to my video result](./project_video.mp4)
 
 ---
 
-###Discussion
+### Discussion
 
-####1. Briefly discuss any problems / issues you faced in your implementation of this project. Where will your pipeline likely fail? What could you do to make it more robust?
+#### 1. Briefly discuss any problems / issues you faced in your implementation of this project. Where will your pipeline likely fail? What could you do to make it more robust?
 
 Here I'll talk about the approach I took, what techniques I used, what worked and why, where the pipeline might fail and how I might improve it if I were going to pursue this project further.
-
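For the lane-fitting and curvature steps the template leaves open (sections 4 and 5 above), a common approach is to fit x = A*y^2 + B*y + C to each line's pixels with `np.polyfit` and evaluate the curvature radius R = (1 + (2*A*y + B)^2)^(3/2) / |2*A| from the fit. The helper below is a sketch under that assumption; the meters-per-pixel scale factors and all names are illustrative, not from `my_other_file.py`.

```python
import numpy as np

def fit_lane_and_curvature(leftx, lefty, rightx, righty, y_eval,
                           ym_per_pix=30 / 720, xm_per_pix=3.7 / 700):
    """Fit x = A*y^2 + B*y + C to each lane line and report curvature in meters.
    leftx/lefty and rightx/righty are arrays of detected lane-pixel coordinates;
    y_eval is the image row (e.g. the bottom row) where curvature is evaluated.
    The pixel-to-meter scale factors are illustrative assumptions."""
    leftx, lefty = np.asarray(leftx, float), np.asarray(lefty, float)
    rightx, righty = np.asarray(rightx, float), np.asarray(righty, float)

    # Pixel-space fits, useful for drawing the lane back onto the image.
    left_fit = np.polyfit(lefty, leftx, 2)
    right_fit = np.polyfit(righty, rightx, 2)

    # Refit in world space (meters) for the curvature estimate.
    left_fit_m = np.polyfit(lefty * ym_per_pix, leftx * xm_per_pix, 2)
    right_fit_m = np.polyfit(righty * ym_per_pix, rightx * xm_per_pix, 2)

    y_m = y_eval * ym_per_pix
    # R = (1 + (2*A*y + B)^2)^(3/2) / |2*A| for x = A*y^2 + B*y + C
    left_curv = ((1 + (2 * left_fit_m[0] * y_m + left_fit_m[1]) ** 2) ** 1.5
                 / abs(2 * left_fit_m[0]))
    right_curv = ((1 + (2 * right_fit_m[0] * y_m + right_fit_m[1]) ** 2) ** 1.5
                  / abs(2 * right_fit_m[0]))
    return left_fit, right_fit, left_curv, right_curv
```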