a) A simple reflex agent would be perfectly rational only if the correct decision can be made on the basis of the current percept alone, i.e., the environment is fully observable. It must also be possible to reduce the number of possibilities to a small set. The agent might still work well when, for example, there is dirt only on the initial square, or when it knows the status of every square in the grid (whether it is clean or dirty) in a static environment.

b) My robot looks for the neighbouring location with the maximum amount of dirt and moves there in order to suck it up. When two or more neighbours tie for that maximum amount of dirt, it chooses among them at random (which accounts for the difference in answers each time we run robot B). Thus, the robot is not fully deterministic.

c) A random agent can perform poorly if the probability of reaching the squares with more dirt is small. Consider a situation with two square grids, A and B, each of size 10*10, connected by a bridging path of dimensions 1*8. Grid A and the path are almost clean (very little dirt), whereas B is very dirty. The agent is initially positioned at the centre of grid A. In such a situation we would expect a rational agent to go to grid B and clean it, but for the random agent the probability of reaching grid B within a fixed number of moves is small. Thus it will perform poorly.

d) Instead of acting on only the current percept, the agent could keep the last few percepts, take their average, and then make its move.

e) If the square that has become dirty can be cleaned in a small number of moves and the ratio of (dirt on the square / number of moves required to clean it) is high, i.e., the gain is large, a rational robot should clean it.
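The greedy move with random tie-breaking from part (b) can be sketched as below. The percept representation (a mapping from moves to observed dirt amounts) is an illustrative assumption, not something specified in the exercise:

```python
import random

def choose_move(neighbours):
    """Pick the neighbouring square with the most dirt; break ties at random.

    `neighbours` maps a move (e.g. 'up') to the amount of dirt observed
    in that direction -- a hypothetical percept format for illustration.
    """
    best = max(neighbours.values())
    # Collect every move that attains the maximum amount of dirt.
    candidates = [move for move, dirt in neighbours.items() if dirt == best]
    # Random choice among tied candidates makes the robot non-deterministic.
    return random.choice(candidates)
```

For example, `choose_move({'up': 3, 'down': 1})` always returns `'up'`, while `choose_move({'up': 2, 'down': 2})` returns either move, which is exactly why repeated runs of robot B can differ.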
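The gain criterion from part (e) reduces to a one-line decision rule. The threshold value below is an assumed parameter, since the exercise only says the ratio should be "high":

```python
def should_clean(dirt_amount, moves_required, threshold=1.0):
    """Clean a newly dirtied square when the dirt gained per move spent
    exceeds a threshold. The threshold of 1.0 is an illustrative
    assumption; a real agent would tune it to its performance measure."""
    return dirt_amount / moves_required > threshold
```

So a square with 10 units of dirt two moves away is worth cleaning, while one unit of dirt five moves away is not.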