Assessment Plan

I chose to implement the “Give Me Five” formative assessment for a question about probability. The activity was about the probability of selecting a marble of a certain color from a bag containing 2 blue marbles, 3 green marbles, and 5 yellow marbles. The question was, “Are the outcomes in the marble experiment equally likely?” Previously, students had done an activity with dice, where we discussed why each number was equally likely. I gave students a minute to think about the question quietly, a minute to talk to a partner, and then a minute to write their own answer. Then, I asked for five volunteers to share either their own answer or their partner’s answer.
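For reference, the answer students were working toward (using the marble counts above): P(blue) = 2/10, P(green) = 3/10, and P(yellow) = 5/10. Because those probabilities are not equal, the outcomes are not equally likely.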

I think that the activity worked pretty well. Students felt very comfortable, especially sharing a neighbor’s response (with permission) rather than their own. I celebrated any correct part of an answer, and after each answer I asked students to share what I liked about it or what I might have liked to see included. In this way, their peers were giving the feedback on each answer rather than me. For example, if a student only said “no,” the class would say that I liked the answer because it was correct, but that the student was missing an explanation of why. Another example was when a student said, “No, because the probabilities of each are different.” Here, students shared that it was good that the student knew the probabilities were different, but that it would have been better if they had said why they were different. A third student said, “Yes, because there are more yellow marbles.” In response, students said that this student understood that probability relates to how many marbles there are, but didn’t understand that different numbers of marbles meant the answer should be no.

If I did this activity again, I think I would collect students’ written answers so that I could select a wider range of responses to share. In some of my periods, students gave answers very similar to whoever answered first.

I use #34, No-Hands Questioning, daily in my class. I have found it to be the most effective strategy for keeping students on task. Sometimes I remind students that I am more likely to call on them if I see them not writing, because I will assume that they already have the answer written down. This ensures that students are on task, or at least asking me or a peer for help, the entire time. I use this as an alternative to my popsicle stick questioning and do indeed select students who have finished writing, or who never started, to answer the question. The more I did this, the fewer students I had who weren’t attempting problems, and thus the fewer students who weren’t engaging in productive struggle. I make sure to alternate between students who actually had an answer quickly and students who were off task. Some of the off-task students, when asked the question directly, could come up with a correct answer on their own. Others I have to walk through the process, and I do so with help from other students near them. It also keeps me from calling on the same eager students for every question.

The biggest challenge with this formative assessment is that my eager students raise their hands to “take over” the answer for the students who were off task. I hate not being able to let them answer, because I want to encourage their excitement about the math, but when I choose this method, my goal is to reach the disengaged students. The students who already get the material sometimes grow impatient with the process, so I try to use it only once or twice a lesson. It really helps to hold students accountable.

What works well is that, because repeated use of this formative assessment has become a classroom norm, students can expect this kind of questioning. Now that they anticipate it, students are much more likely to productively struggle through a question, and it holds them all accountable for their own learning, since they can’t rely on a peer to “rescue” them when it is their turn to answer.

Popsicle Stick Questioning, or #44, is my favorite formative assessment. I have been using it all year. On Day #1 of my class, students got to decorate a popsicle stick with their name on it, and I used it as a get-to-know-you activity. I randomly drew students, and they had to share why they decorated their popsicle stick the way they did and something about themselves. They enjoyed the activity, and I got to associate names with faces. Then, in my regular classroom, I use it as a way to make sure that I call on every student as equally as possible. I have two cups, and as I draw names, the sticks get put into the other cup so that those students don’t get called on again until every student has been called on. It holds everyone accountable and makes sure that I reach my high, medium, and low students. As the year progressed, I began to recognize the eye-catching details on my high students’ sticks in each class, so if I had a tough question coming up, I could conveniently select them. On occasion, I do have to adjust my question depending on which popsicle stick I draw, based on how well that student has been engaging.

I think the biggest thing that I have to work on is trying not to adjust my questions based on who I select. I worry that I sometimes “dumb down” my questions too much for my lower students, but usually when that happens, I just give them a few questions in a row instead of one.

My most successful use of this is when students have an assignment to practice procedures or work in groups. I can use popsicle sticks to assign groups, and then assign each group (or individual) a question. The students selected become group leaders, and the group or individual assigned to a question is responsible for explaining to the class how they solved it. It works best on review days. I like it because it takes a lot of the direct instruction pressure off of me and makes the students responsible for their own learning. Plus, students can really see what they understand and what they don’t, and they feel like they have a manageable amount of work for the short amount of time they have to solve the problem.

I really enjoyed using #63, or Thumbs Up, Thumbs Down, during my inequalities unit. A big part of inequalities is, of course, graphing them once we have a solution. Students understood how to solve inequalities reasonably well, but once they had an answer, they commonly struggled with which kind of circle to use (open or closed) and which direction to shade on the number line. This formative assessment helped with both, since each was a binary choice.

When graphing, once they had had a good deal of class time instruction on the matter, I began asking the class to give a thumbs up for a closed circle (because it meant the value was included in the solution) or a thumbs down for an open circle (since it meant the value was not included in the solution). This emphasized the meaning of the circle and gave me a way to ask one student from each side why they gave that answer. As I used the activity more often, the class as a whole was correct more and more often.

I also used the activity for shading left or right. Thumbs up meant shading to the right, since the solutions were numbers greater than the boundary value, and thumbs down meant shading to the left, since the solutions were numbers less than it. I had a lot of success with this mindset, since it worked regardless of which side of the inequality sign the variable was on.
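To make the two binary choices concrete, here is an illustrative pair (my own example, not necessarily one used in class): for x ≥ 3, students would show a thumbs up for a closed circle at 3 and a thumbs up again to shade right; for 5 > x, they would show a thumbs down for an open circle at 5 and a thumbs down to shade left, since the solutions are numbers less than 5.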

Using this formative assessment for these concepts really emphasized the conceptual understanding of what an inequality is. One downside is that sometimes students would look at a classmate who usually got it right and copy their thumb because they were afraid of being wrong. I adjusted by giving students 30 seconds to think without putting a thumb up, and then asking them to put their heads down. Once no one could see anyone else, I asked for their thumb answers. They were allowed to hide their thumbs before heads came back up, which reduced any embarrassment. Then, as a class, depending on group understanding, we could discuss both options, and I received more honest feedback.

3 to 5 main objectives for the chapter:

  • Students learn vocabulary related to probability. They calculate the probability of simple events and their complements and express the results as fractions, decimals, and percents. Students also place probabilities on a number line from 0 to 1. Standard: 7.SP.5
  • Students analyze an experiment where the outcomes are not equally likely. They calculate each outcome’s probability and construct probability models. They use the models to determine probabilities. Standard: 7.SP.7
  • Students experiment with probability. They flip a coin, roll a die, and spin a spinner. They learn that as the number of trials increases, the experimental probability approaches the theoretical probability. Students use percent error to compare the probabilities. Standards: 7.RP.3, 7.SP.6, and 7.SP.7b
  • Students create simulations for real-world situations and run them to anticipate real-world values. Students display their simulation results in a table and interpret the results in the context of the situation. Students compare their results to real-world data and see, from a large amount of data, that as the number of trials increases, the experimental probability approaches the theoretical probability. Specifically, they looked at the number of babies born in 2022 and performed a coin-flip simulation to estimate the number of female babies born. (A minimal version of that coin-flip simulation is sketched just after this list.)
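
Below is a minimal sketch of that kind of coin-flip simulation, written after the fact for illustration; the trial counts are made-up placeholders, and it is not the tool the class actually used. It also shows the percent-error comparison from the third objective.

```python
import random

def simulate_female_births(num_babies):
    """Flip a fair coin once per baby and count heads as a female birth."""
    females = sum(1 for _ in range(num_babies) if random.random() < 0.5)
    experimental = females / num_babies      # experimental probability of a female birth
    theoretical = 0.5                        # theoretical probability for a fair coin
    percent_error = abs(experimental - theoretical) / theoretical * 100
    return experimental, percent_error

# As the number of simulated births grows, the experimental probability
# should settle near 50% and the percent error should shrink.
for n in (10, 100, 1_000, 100_000):
    p, err = simulate_female_births(n)
    print(f"{n:>7} births: experimental = {p:.3f}, percent error = {err:.1f}%")
```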

Additional formative assessments used in class during this module that students did NOT turn in, but that gave me, as the teacher, valuable feedback about student knowledge:

  • I used a strategy harvest (#51) when students had to set up and solve a proportion to determine an estimated result based on a given probability. For example: if 7 out of 20 students at a middle school are in seventh grade, how many seventh graders would you expect if there were 100 students? (The proportion itself is worked out just after this list.) As a class, we discussed 3 different ways to set up and solve the proportion. Then students got a different problem. They had to solve it with their own strategy, then find a partner and solve it with the partner’s strategy. Then they had to solve it with a strategy different from both their own and their partner’s. In the last box, they had to write a paragraph explaining which strategy they preferred and why.
  • I used an opposing views probe (#36), where I predicted the probability in three ways. Two were correct, using different methods, and one was incorrect: it used a correct method but arrived at an incorrect answer through a common error. Students had to describe what was done in each method and whether it was correct or incorrect, and if it was incorrect, they had to explain the error and solve it correctly using that method.
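
For reference, the worked proportion from the seventh-grade example above: 7/20 = x/100, so x = (7 × 100) / 20 = 35, meaning we would expect about 35 seventh graders out of 100 students.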

These assessments were both turned in on a blank sheet of white paper. In the future, I think I would make a document and give them copies of it to submit. 

5 practices assignment:

I implemented an activity designed for students to see that probability is just a prediction. My goals for the lesson were to have probability make sense in a tangible way, to have students see that probability doesn’t always align with reality, and to teach the concept that the more trials you have, the closer the results align with the calculated probability.

I began the lesson by reviewing ratios, emphasizing that they are just fractions, defining very clear language so that ratios feel more approachable, and reviewing the procedure to turn a fraction or decimal into a percent.

Then, switching between whole-class discussion and group discussions, students pieced together the odds of rolling a six on a fair six-sided die, and the odds of rolling either a five or a six on the same die. The scenario we used to make it feel more realistic was a board game; the example I gave was Risk, for those who had played it before, where the general consensus was that the more high rolls you have, the more likely you are to win the game. Eventually, students correctly calculated that the odds of rolling a six were about 16.67%, and the odds of rolling either a five or a six were about 33.33%.
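For reference, the calculations behind those numbers: P(6) = 1/6 ≈ 16.67%, and P(5 or 6) = 2/6 = 1/3 ≈ 33.33%.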

Then, I asked for six volunteers. My students are pretty excited any time I ask for volunteers because I have a history of it being for fun, so I tried to choose students who historically don’t engage as often, my regular sleepers, and other students that I suspected would struggle more with the concepts. Each volunteer got a marker and went up to the whiteboard. They each got one number from one to six that was their responsibility for the activity. 

Students in the audience had to have a piece of paper, and the six volunteers each chose a peer to keep records for them, too. Then, using a random number generator on my projector, I simulated a dice roll. The student in charge of the resulting number made a tally mark under their column, and I led a class discussion about how the “real” result of our one roll was that one number had occurred 100% of the time and the others 0%. Then we kept adding more and more simulated rolls, calculating the percent of rolls that each number had received from our data, and we watched the numbers get closer and closer to our benchmark.
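
Here is a minimal sketch of what the class did by hand, assuming a fair six-sided die and a made-up number of rolls; the random number generator below simply stands in for the one on my projector.

```python
import random
from collections import Counter

NUM_ROLLS = 60                 # placeholder; in class we added rolls a few at a time
THEORETICAL = 1 / 6            # about 16.67% for each face

tallies = Counter()
for _ in range(NUM_ROLLS):
    tallies[random.randint(1, 6)] += 1
    # After the first roll, one face sits at 100% and the rest at 0%;
    # the running percentages drift toward ~16.67% as rolls accumulate.

for face in range(1, 7):
    percent = tallies[face] / NUM_ROLLS * 100
    print(f"Face {face}: {tallies[face]:>3} rolls ({percent:5.1f}% vs. {THEORETICAL:.2%} expected)")
```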

Students had to hold onto their pieces of paper, and at the beginning of class the next day, I had the combined data from all of my math sections, which quadrupled the number of trials. In each section we compared that section’s data to the combined data, noting key differences and noting how the actual results got closer to our anticipated probability the more trials we did.

Learning goals

-What was your learning goal(s) for the lesson? Was the task interesting to students? Was it at the right difficulty level, or did it have enough scaffolding?

My learning goal was for students to understand that probability is a prediction, and that the more data there is, the more likely it is that the outcomes in the data will reflect the prediction. I had multiple students in each section say that it was their favorite day with me all year. Many, many students also told me that they understood what probability was, and that they understood percentages and what they tangibly meant better than they did last semester during the percent module. Having volunteers to keep track of numbers kept the entry point low, and the classroom discussions and higher level questions elevated the lesson to have a high ceiling.

Anticipating

-What student responses, correct or incorrect, did you expect in this lesson? How do you see these ideas connecting to the mathematical concepts you would like students to learn? Would you do anything differently in regards to anticipating with this activity next time?

I anticipated that some students would be confused at the beginning about how one number could be at 100% while the rest were at 0%, but that didn’t actually come up. I also anticipated that students would incorrectly calculate their percentages by moving the decimal only one place, but not as many made this error as I expected. Many students calculated things correctly, and by the end of the lesson many saw how real-life trials reflected the math that we calculated at the beginning. This lesson set students up for success in the next lesson about randomly selecting marbles when there are different numbers of different colored marbles in a bag. I honestly had so much success that I think the only thing I might change is creating a worksheet instead of having students fill out their own piece of paper.

Monitoring

-How did you monitor students in this lesson? How did you ensure all students were engaged? Do you feel that you asked students good questions while they worked? Would you do anything differently in regards to monitoring with this activity next time?

During the group discussions I went around the room. During the activity itself, when I noticed an audience member was not engaged, I asked them to guess which number they thought would come up next, which made them invested in the next result. If they were right, I praised them for their luck, and if they were wrong, I asked how many rounds they thought it would take to get their chosen number. This technique made sure that audience members were keeping tallies on their papers along with the volunteers throughout the activity. It also didn’t take long to generate the data, so students didn’t have to be bored by the monotony of it. I do think that I asked good questions. Students drew the conclusions themselves with little prodding and thought that it was really cool how the data reflected what they had calculated. I think my pre-planned method to keep students engaged worked well. I probably could have had another volunteer run the random number generator so that I could walk the room while they worked.

Selecting

-How did you select student responses in connection with your goal for the lesson? Would you do anything differently in regards to selecting with this activity next time?

I weighed the positive feedback most heavily. Most students had very similar answers at the end of the lesson. I asked them to explain in their own words what probability was, and many, without any hinting, included something about how it can differ from reality. About half also included something about how, with more data, our calculations are shown to be more accurate. I think the only change I would make would be to save time at the end of class to discuss student responses.

Sequencing

-What was your rationale for the sequence of student ideas that you used? Would you do anything differently in regards to sequencing with this activity next time?

I shared some student responses the next day. I started with a very complete answer and described what I liked about it. I asked students if there was anything to add, and my classes could come up with a few things. When I then read a less complete answer, I asked my students what I liked about it, and then what that student could have included. I chose different responses in each class, and I deliberately chose responses from other classes so that my students didn’t feel like their answers were bad. I told my students that the responses were from other classes, too, so they didn’t feel bad about being critical.

Connecting

-How did you connect different solutions or ideas to the key mathematical ideas in the lesson? Would you do anything differently in regards to connecting with this activity next time?

I mostly tried to get really excited about the student responses and to reiterate my key ideas. I also tried to ask questions that related to the next lesson, like “How would this change if, instead of a three, there was a second two?” and other probing questions to get students thinking about how there can be different probabilities when the options appear in different numbers. It really helped to prepare students for the marble activity in a later lesson. I also connected it well to last semester’s module on percentages. I don’t know what else I could have done to connect it even further, but I am sure there is something!