Tablet PC School Grant Application Review

Funding exists for good ideas about using a Tablet PC for student learning in school. A school can secure that funding by submitting a competitive proposal to a grant-issuing agency. In response to requests, here are a few observations about the review process a proposal receives.

Each competitive school grant application undergoes at least two evaluations once the funding agency receives it. Applicants should make sure they address these evaluations before submitting their applications. Consider obtaining a copy of a blank proposal reviewers’ score sheet as early in the grant application process as possible. Make sure the proposal addresses each item on that sheet. The best proposals I read used that score sheet as the outline for the proposal.

I’ve reviewed proposals both before and after institutions submitted them to funding agencies. I don’t know how many proposals, but enough to see several patterns that reviewers, including me, used for both competitive and single-source funding. Sometimes a reviewer received a stipend (payment for participating in the review) plus reimbursement for costs.

Perhaps these pedantic observations will help applicants prepare competitive proposals for funding to underwrite projects involving a Tablet PC or other ink-enabled mobile computer. Expect your proposal to receive an equally pedantic review. Applicants will also receive a pedantic explanation of the proposal’s status in the funding program.

First Review
The first review by the granting agency determines whether the proposal meets funding criteria and whether the applicant meets qualifying criteria. This may mean that the applicant must qualify as an educational institution. An individual who applies would likely receive notice that the proposal does not qualify for funding.

If the applicant qualifies, then granting agency staff will determine whether the proposal fits one of the funding programs the agency operates.

Granting agencies generally include criteria for these two reviews in their grant application packet. An applicant may also contact the granting agency to clarify whether the applicant and the proposal likely qualify for funding. Many applicants meet face-to-face with the granting agency, although this may not be a necessary step.

Technical Review
When I reviewed proposals for competitive funding, I first received an invitation from the granting agency to appear on a certain date and time in a conference room at an identified place, such as a Washington, DC hotel. The invitation specified which funding program I was to review for and indicated that I qualified as a specialist in the content of that program.

About 30 of us appeared to review competitive U.S. Department of Education proposals. I knew most of the people from previous contacts or from professional publications. At one meeting, a former student of mine stood to present something to us, stopped abruptly and announced, “Dr. Heiny! I didn’t know you were here.” This sounds informal, but everyone present knew that this was a serious, formal time involving many dollars affecting the lives of uncounted people.

We listened to instructions and each received a stack of 30+ proposals in a paper shopping bag, with instructions to reassemble with a subgroup on the third day. We were to assign points to each proposal and give reasons for those points. The subgroup would then discuss each proposal and rank all proposals from most points to fewest. Grants were issued starting with the proposal with the most points and moving down the list until the program had no more money to commit. I took my bag of proposals to my hotel room, as did the others. Each of us followed two sets of procedures.

General Reviewer Procedures. Each reviewer read each proposal, comparing the presentation in each section with the evaluation criteria the funding agency set for that section. We rated each section on a 1-to-10-point scale, with 10 points meaning the most credit assignable for that section, and wrote comments explaining why we awarded those points. At the meeting on the third day, each of us in the subgroup recited the points we awarded and our reasons, section by section. Before moving to the next section, we reached consensus about how many points to award. Interestingly, major differences seldom occurred among reviewers. At the end of the grant process, the granting agency sent our individual and subgroup score sheets, with points and comments, to the applicant.
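The mechanics of this process amount to a simple greedy allocation: total the section scores, rank proposals from most points to fewest, and fund down the list until the money runs out. Here is a minimal sketch of that logic; the proposal names, section scores, requested amounts, and program budget are hypothetical illustrations, not figures from any actual review.

```python
# Hypothetical proposals: one 1-10 score per proposal section,
# plus the dollar amount each proposal requests.
proposals = {
    "Proposal A": {"scores": [9, 8, 9, 7], "requested": 120_000},
    "Proposal B": {"scores": [6, 7, 5, 6], "requested": 80_000},
    "Proposal C": {"scores": [8, 8, 7, 8], "requested": 150_000},
}

program_budget = 250_000  # hypothetical total the program can commit

# Total the section points for each proposal.
totals = {name: sum(p["scores"]) for name, p in proposals.items()}

# Rank proposals from most points to fewest.
ranked = sorted(totals, key=totals.get, reverse=True)

# Fund down the list, stopping when the next proposal
# costs more than the money remaining.
funded = []
remaining = program_budget
for name in ranked:
    cost = proposals[name]["requested"]
    if cost > remaining:
        break
    funded.append(name)
    remaining -= cost

print("Rank order:", ranked)
print("Funded:", funded)
```

In this toy run, the top-ranked proposal is funded, and the second-ranked proposal exceeds the remaining budget, so the list stops there even though a cheaper proposal sits further down — matching the "stop down the list" rule described above rather than a skip-and-continue rule.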

How and What I Looked For in a Proposal. I first divided my time among proposals. If I had 30 proposals and 30 hours of reviewing time in the next three days, I allocated one hour per proposal.

I expected to read formula writing. That is, the main difference among proposals was the content, not the writing style or format of the proposal.

I skimmed through the proposals, separating them into piles with the easiest ones to read on top. My standard for best was peer-reviewed professional journal publication, the conventional professional standard.

The easiest proposals to read had a coherent, terse abstract following American Psychological Association publication guidelines (plus my seventh-grade English teacher’s instructions; thank you, Mrs. Spurr of Roosevelt School, Burlingame, CA, for insisting I learn to diagram sentences 57 years ago). Paragraphs started with a topic sentence. Both the abstract and the topic sentence described what followed. The second sentence elaborated the subject of the topic sentence, the third sentence addressed the verb, and so on.

I read the easiest proposals first. These usually came from universities and professors working with local education agencies. I scored and commented on each section as I read, sometimes writing notes in the margin of the proposal and correcting grammar, citation errors, and the like, the same way I edited student and professional papers and reports.

I read to answer three generic questions. Frequently, I formulated working answers by the time I read the abstract and quickly surveyed the proposal packet. Several times, I had those working answers after reading the first sentence of the abstract; the writer described operationally the technical purpose, process, outcomes, and relevance in 10 to 12 common-sense words.

First, does this proposed project comply with authorizing directives or legislation for funding?

To my surprise, some of the projects did not comply. The grant-issuing office had left it to peer review to determine compliance. These proposals received low points and recommendations that they not receive funding.

Second, if funded, will this project likely yield proposed results?

This is a tough judgment to make. I relied on the evidence identified in the proposal, what I knew of the professional literature cited in the proposal and beyond, what I knew about similar projects and personnel, what I knew of the people involved from reading their resumes in the proposal packet, and whether the logic used to propose the project made sense, independent from the vision and purpose of the project.

A proposal that drew a straight line of logic from objectives to results, backed that logic with objective, empirical evidence, and came from personnel and an institution with experience in similar projects received more points than one from a novice in a new agency proposing to test “a good idea.”

More specifically, I followed the same pattern other reviewers followed. I gave priority to projects that extended what was already known, that showed high probability of advancing student achievement for the least amount of resources, and that included major personnel with accomplishments in the highest rank of scholarship and innovation, whether in education or another arena of informed, disciplined effort in the public, private, or independent sectors.

Third, where should this project rank in order of importance of proposed results compared with other projects I reviewed?

I gave the highest ranking to projects that proposed the most, and the most immediate, student achievement gains for the least use of resources, over projects that took longer to produce student results and used more resources. Planning projects and conferences received lower rankings than changes through direct instruction and learning of students. I also gave priority to projects that proposed to show individual student gains over those that used aggregated (such as group-based and standardized) proxies for these gains. I used my familiarity with professional literature about experimental and other smaller-scale project results as benchmarks against which I evaluated proposal citations, activities, personnel, and consultants in making these rankings.

Subgroup Meetings. After reviewing and scoring proposals, groups of five to seven reviewers met for several hours to compare and reconcile our evaluations, one proposal section at a time. Interestingly, we seldom brought in extremely different reviews: most scores varied only one or two points across reviewers, and most times we recognized or made the same general comments about each section. Only once did a discussion of differences extend beyond about 10 minutes. The funding agency staffer asked us to reach a consensus on the total proposal. We discussed our observations, scores, and reasons as she listened and made notes. We advised her of our unanimous agreement, which she recorded.

Reviewing proposals submitted by others was an honor. I reviewed many excellent proposals that did not receive funding in my review cycle. Hopefully, reviewer comments helped clarify ways to strengthen these projects. Some did receive funding in other funding cycles and funding programs.

So, make the effort to submit a proposal if you have “a good idea.” Maybe reviewer comments will help you shape your proposal so that your next reviewer will find your work easy to read and award maximum points toward funding.