Sunday, August 29, 2010

Cross-Cultural Research on Human Development

An entire journal issue on cross-cultural human development. It is behind a paywall, I'm afraid, but you can see the abstracts. I haven't read it yet, but a summary of some of the material is on Neuroanthropology, and it looks fascinating.

Saturday, August 28, 2010

Problems with the use of Student Test Scores to Evaluate Teachers

originally posted at Daily Kos

If new laws or policies specifically require that teachers be fired if their students' test scores do not rise by a certain amount, then more teachers might well be terminated than is now the case. But there is not strong evidence to indicate either that the departing teachers would actually be the weakest teachers, or that the departing teachers would be replaced by more effective ones. There is also little or no evidence for the claim that teachers will be more motivated to improve student learning if teachers are evaluated or monetarily rewarded for student test score gains.


That is a quote from the Executive Summary of one of the most important policy briefs about education in recent years. At a time when the Dept. of Education is pushing to tie teacher evaluation and compensation to student test scores, this Economic Policy Institute Briefing Paper (whose title is the same as this diary's, and which is a pdf) pulls together the extensive relevant research that demonstrates the dangers of pursuing such a path. Please continue reading as I explore this important document, released at 12:01 AM today, August 29.

First, let me clarify several things.

This is a very long diary. That is because I am trying to cover, reasonably thoroughly, the contents of an extremely important document. My purpose in doing so is to convince people of the document's importance. Thus I will be perfectly happy should you decide you do not need to read further in what I have written below. You can follow the link for the brief (which I have provided again), download the pdf, and begin reading. The executive summary is only four pages; the brief itself, without the critical apparatus of footnotes and sources, another 17. So if you want, one more time, follow this link.


First, this document has been in the works for several months, and was NOT hurriedly put together as a response to the recent series by the Los Angeles Times which used value-added assessment to label teachers in the Los Angeles Unified School District. Second, the ten scholars whose names are on the document are some of the most eminent in educational circles, including among them former presidents of the American Educational Research Association and the National Council on Measurement in Education, two of the three professional organizations most involved with psychological measurement, of which school-related testing is a subset. One of the scholars, Robert Linn, has not only presided over both of those organizations, he has also served as chair of the National Research Council's Board on Testing and Assessment. The group also includes the immediate past president of the National Academy of Education, Lorrie Shepard, Dean of the School of Education at Colorado. A brief and applicable curriculum vitae for each of the ten authors can be found at the end of the document, with briefer descriptions at the beginning, where each author is listed along with the following statement:
Authors, each of whom is responsible for this brief as a whole, are listed alphabetically.
An email address is provided for further contact.

The ten authors, alphabetically, are as follows:
Eva L. Baker
Paul E. Barton
Linda Darling-Hammond
Edward Haertel
Helen F. Ladd
Robert E. Linn
Diane Ravitch
Richard Rothstein
Richard J. Shavelson
Lorrie A. Shepard

Let me be blunt. I do not know how anyone who knows the work of these scholars and who reads this brief can accept the idea of placing any stakes, as to firing or the awarding of merit pay, on the current state of value-added assessment methodologies. The document is thorough. It reviews all the relevant studies, including one not yet in print. Those include studies by Mathematica for the US Department of Education; by Rand; by the Educational Testing Service; one done for the National Center for Education Statistics of the Institute of Education Sciences of the U.S. Dept. of Education; one issued by the Board of Testing and Assessment of the Division of Behavioral and Social Sciences and Education of the National Academy of Sciences; and so on. There are citations from books and from peer-reviewed journals.

I am not a scholar. I am a high school social studies teacher. During now-abandoned doctoral studies in educational policy I became interested in value-added assessment and devoured what studies there were in the educational literature. I also talked extensively with the technical person for one organization that offered a value-added methodology, who cautioned me that the approach was not stable enough to be used as the basis for decisions with any kind of meaningful stakes. That was about a decade ago. What I have read since, and what I have absorbed from this brief, convinces me that the situation is not significantly better now.

But you do not have to take my word for it. Let me offer a few key examples from the study. Those who follow me on Daily Kos have already seen, from the Mathematica study, the high rate of error in identifying superior and inferior teachers beyond the broad middle. In this diary, written on August 27, I noted that the error rate with 2 years of data was 36%, with 3 years 26%, and even with 10 years of data still 12%.

But that is just the tip of the iceberg of the technical problems with using such an approach.

Without recapitulating the entire brief, let me offer a couple of other key points.

1. Results for individual teachers are not stable:
One study found that across five large urban districts, among teachers who were ranked in the top 20% of effectiveness in the first year, fewer than a third were in that top group the next year, and another third moved all the way down to the bottom 40%. Another found that teachers' effectiveness ratings in one year could only predict from 4% to 16% of the variation in such ratings in the following year.
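
To see how such churn can arise from noise alone, here is a minimal simulation sketch in Python. Everything in it is my own assumption for illustration, not an analysis from the brief: I give each teacher's annual rating a small stable component explaining 10% of its variance, a number chosen to sit inside the 4-16% predictability range just quoted.

```python
import random

random.seed(42)

N_TEACHERS = 1000
TOP_N = N_TEACHERS // 5  # top 20%

# Assumed variance split: the stable "true effect" explains ~10% of a
# rating's variance; the rest is year-specific noise.
STABLE_SD = 0.10 ** 0.5
NOISE_SD = 0.90 ** 0.5

true_effect = [random.gauss(0, STABLE_SD) for _ in range(N_TEACHERS)]
year1 = [t + random.gauss(0, NOISE_SD) for t in true_effect]
year2 = [t + random.gauss(0, NOISE_SD) for t in true_effect]

def top20(ratings):
    """Indices of the TOP_N highest-rated teachers."""
    order = sorted(range(N_TEACHERS), key=lambda i: ratings[i], reverse=True)
    return set(order[:TOP_N])

stayed = len(top20(year1) & top20(year2)) / TOP_N
print(f"Share of year-1 top-20% teachers still there in year 2: {stayed:.0%}")
```

In runs of this sketch, well under half of the year-one top group remains on top the next year, even though nothing about the simulated teachers changed; the churn is a product of noise, not of real changes in effectiveness.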


2. One key question is whether one is really capturing teacher effects, and excluding other influences, in the results one gets from value-added assessment. Jesse Rothstein reported something interesting on this point, which I quote from the Executive Summary:
A study designed to test this question used VAM methods to assign effects to teachers after controlling for other factors, but applied the model backwards to see if credible results were obtained. Surprisingly, it found that students' fifth grade teachers were good predictors of their fourth grade test scores. Inasmuch as a student's later fifth grade teacher cannot possibly have influenced that student's fourth grade performance, this curious result can only mean that VAM results are based on factors other than teachers' actual effectiveness.


3. The brief addresses the argument that the private sector evaluates professional employees using parallel quantitative measures. The authors point out that such quantitative measures are rarely the sole or even the primary factor, noting that management experts warn against using such measures for making salary or bonus decisions. They remind us that some of the distortion on Wall Street was the result of emphasizing short-term gains that could be easily measured. They also touch on medicine:
In both the United States and Great Britain, governments have attempted to rank cardiac surgeons by their patients' survival rates, only to find that they had created incentives for surgeons to turn away the sickest patients.


4. Students are not randomly assigned to teachers. While some control for school effects is possible, scholars are reluctant to place any weight on comparisons for teachers in different schools even within the same system. And even within a school, teachers may have varying numbers of students who are learning English or have learning disabilities or are homeless or who move multiple times, each of which is a factor that can affect learning.

5. Sample sizes are often too small. Even if the class makeup stays stable during the year, and all the students show up regularly, the N=30 of a large elementary class is too small a sample to provide a result that can allow strong inferences to be drawn. Often the makeup of the class changes during the year. If you exclude students who were not there all year, or whose absences exceed some designated level, the N decreases, providing a result of even less reliability.
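
To make the sample-size point concrete, here is the back-of-envelope arithmetic in Python. The spread of individual student gains (0.8 test-SD units) is purely my assumed number for illustration, not a figure from the brief:

```python
import math

CLASS_SIZE = 30
STUDENT_GAIN_SD = 0.8  # assumed spread of individual student score gains

# The standard error of the class-average gain shrinks only with sqrt(N)
se = STUDENT_GAIN_SD / math.sqrt(CLASS_SIZE)

# Full width of an approximate 95% confidence interval (average +/- 1.96 SE)
ci_width = 2 * 1.96 * se

print(f"standard error of the class average: {se:.2f}")
print(f"approximate 95% interval width:      {ci_width:.2f}")
```

Under these assumptions, the 95% interval for a class of 30 spans more than half a test standard deviation, which is larger than the differences typically estimated between teachers; and excluding mobile or frequently absent students shrinks N further and widens the interval.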

6. Some argue that statewide data banks can address the question of student mobility. But if you derive results on a year or two years of data where the student has moved, how much of the improvement can properly be assigned to any one teacher? Even in elementary school, do we account for pull-out instruction, or possible tutoring (that could in some cases be counterproductive) as a possible influence on the test results upon which we base our analysis?

7. Even with value-added analysis, to date scholars have not been able to isolate the impact of outside learning experiences, home and school supports, and differences in student characteristics and starting points when trying to measure their growth.

8. A proper system of value-added assessment would require vertically scaled tests. Most states do not currently have such tests; neither New York nor California does, for example. That is, the tests in one grade are not necessarily aligned along a continuum with those of the next - we are not testing on the same scale from year to year. As testing expert Dan Koretz of Harvard is quoted as noting,
"because of the need for vertically scaled tests, value-added systems may be even more incomplete than some status or cohort-to-cohort systems"
Here it is worth noting that a cohort-to-cohort comparison measures this year's fourth graders against last year's, which is how Adequate Yearly Progress under No Child Left Behind has been calculated.

9. If measuring end of year to end of year, even with vertically scaled tests, there is still the well-documented issue of summer learning loss, which falls disproportionately upon those of lesser economic means, and therefore disproportionately upon students of color, who are more heavily represented at the lower end of the economic scale. If we do not control for summer learning loss, our results are skewed. Allow me to quote a relevant portion of the study:
researchers have found that three-fourths of schools identified as being in the bottom 20% of all schools, based on the scores of students during the school year, would not be so identified if differences in learning outside of school were taken into account. Similar conclusions apply to the bottom 5% of all schools.
The authors also cite a study that shows "two-thirds of the difference between the ninth grade test scores of high and low socioeconomic status students can be traced to summer learning differences over the elementary years."

There is more, but this should give a real sense of how much there is in this paper, how thoroughly the authors examine relevant material to demonstrate that value-added assessment, the supposed magic bullet to allow us to tie student learning back to the effectiveness of teachers, cannot properly fulfill the task some wish to give to it.

The authors acknowledge that value-added approaches are superior to some of the alternative methods of using test scores to evaluate teachers. These are:

status test-score comparisons - comparing the average scores of one teacher's students to those of another's

change measures - comparing the average test results of a single teacher's students from one year to the next - remember, these are different students

growth measures - comparing the scores of a teacher's students this year to the scores of those same students the previous year, when they had different teachers.

Each of these approaches has serious problems with it. One can read the detailed explanation on p. 9. Value-added assessments may be an improvement, but
the claim that they can "level the playing field" and provide reliable, valid, and fair comparisons of individual teachers is overstated. Even when student demographic characteristics are taken into account, the value-added measures are too unstable (i.e., vary widely) across time, across the classes that teachers teach, and across tests that are used to evaluate instruction, to be used for the high-stakes purposes of evaluating teachers.



Let me offer a few of the quotes about value-added assessment that the authors of the brief offer from scholars who have examined the approach over the years, and then I will offer a few observations of my own.

In 2003, a research team at Rand concluded:
The research base is currently insufficient to support the use of VAM for high-stakes decisions about individual teachers or schools.


In 2004, Donald Rubin opined
We do not think that their analyses are estimating causal quantities, except under extreme and unrealistic assumptions.


Henry Braun, then at ETS, offered this in 2005:
VAM results should not serve as the sole or principal basis for making consequential decisions about teachers. There are many pitfalls to making causal attributions of teacher effectiveness on the basis of the kinds of data available from typical school districts. We still lack sufficient understanding of how seriously the different technical problems threaten the validity of such interpretations.


Last year the Board on Testing and Assessment of the National Research Council of the National Academy of Sciences wrote to the Department of Education saying
...VAM estimates of teacher effectiveness should not be used to make operational decisions because such estimates are far too unstable to be considered fair or reliable.


Finally, this year, a report of a workshop run jointly by The National Research Council and the National Academy of Education offered this:
Value-added methods involve complex statistical models applied to test data of varying quality. Accordingly, there are many technical challenges to ascertaining the degree to which the output of these models provides the desired estimates. Despite a substantial amount of research over the last decade and a half, overcoming these challenges has proven to be very difficult, and many questions remain unanswered...


Let me repeat that last sentence, written this year: Despite a substantial amount of research over the last decade and a half, overcoming these challenges has proven to be very difficult, and many questions remain unanswered...

And yet this administration wants to move ahead with using student test scores, perhaps analyzed through value-added assessment methodologies, as a significant component of teacher evaluation. It is including this as part of the criteria to win Race to the Top funds. In fairness, the Department does not specify using value-added assessment (although anything else is far worse), nor does it specify what percentage of the evaluation is to depend upon the test scores; both decisions are still left to the states, some of which have left themselves wiggle room in their applications, using terms like "significant" to indicate the proportion of the evaluation that will depend upon student test scores.

The original Bush proposal for No Child Left Behind, as it went up on the White House website shortly after the inauguration of the 43rd president, proposed giving a 1% bonus of Title I money to schools that would give parents the value-added scores of their students' teachers. That, fortunately, did not make it into the final legislation. Now we have the Los Angeles Times action, about which the Secretary of Education has offered a somewhat mixed and confusing response, even as he seems to support the idea of using such evaluations in assessing teachers. Since the Times story broke, some who write or advocate about education have praised what the paper did, while others have condemned it. While mine might not be a major voice on education, I find myself very much in the latter camp.

One problem is that too many who write about education are close to ignorant about the limits of the information one can get from various kinds of assessment. As a society we want hard numbers; we are obsessed with comparisons and rankings. In the process we often give far more credence to quantitative measures than they warrant.

I do not dispute that tests, including tests external to the school, have some utility. I also recognize that value-added assessment is beginning to offer some useful additional information. But by itself that information is not sufficiently reliable that people's livelihoods should be solely or even heavily determined by it. Value-added results MAY flag a teacher outside the norm - either well above or well below - but as the various studies you will encounter in this brief demonstrate, that is not necessarily the case: the results are not yet stable for individual teachers from year to year, we do not yet know how to properly control for non-instructional factors that can influence the scores upon which the analysis is based, nor can we properly distribute responsibility for student learning among the different adults who interact with a child at school.

I am a high school teacher. Let me offer a hypothetical: if I do more work in a social studies class on a particular kind of writing, and that is what is assessed on the English exam, does the English teacher properly deserve the credit or blame for how students do on that part of the test? Those of us who teach in high school are aware that students often learn about our content either in other classes or from interactions outside of our classroom. Sometimes what they learn is correct and increases their performance in our class; sometimes it is incorrect and undercuts what we are instructing. To date, even value-added assessment is insufficient to control for such influences and allow proper inferences to be drawn about the actual impact of the teacher upon the learning of the students.

I have only explored a small portion of the material in the brief. You can download it without paying. If you are worried about whether you will be able to understand the contents, don't be. You can start with the executive summary, in which you will find most of the key takeaways, written in language and presented in a style that is easily accessible. It is a bit less than four pages. The brief itself runs from pages 5-21, followed by three columns (over a page and a half) of footnotes and five columns (over three and a half pages) of sources. You can read through the brief without checking the footnotes, or you can, if you want, glance at the back to see who is being cited when that is not clear in the text.

Let me be clear. The authors are not opposed to value-added assessment. They are not even opposed to its being included in the process of teacher evaluation, although they offer some serious cautions that policy makers would be well advised to consider.

The title is accurate - there are still serious problems with using test scores to evaluate teachers. These problems are not solved by resorting to a value-added methodology.

We need to be careful not to denigrate or discourage our teaching corps. We will not improve education if the end result of our efforts is to drive away the very teachers who most connect with students, who are able to inspire those students to persist when they are struggling, who are willing to take on the harder-to-teach. We have other methods of ascertaining whether teachers are in fact effective. We should not abandon them in favor of quantitative measures that cannot, as yet, fully carry the load.

The authors of this study have enough prestige that one can hope our media will give some attention to it. Those responsible for educational policy at local, state and national levels are not doing their jobs if they are unwilling to read and be sure they understand the implications of this brief.

That said, and while I will try to bring this document to the attention of as many policy makers as I can, I do not have high hopes that our wrongheaded, headlong pursuit of quantitative measures of teacher effectiveness can even be slowed. I will add what voice I have to the efforts of these scholars. Perhaps after you read the brief, you will add yours?

Thanks.

Sunday, August 15, 2010

How to Avoid Scholarship Scams

How do you know which scholarships are legitimate and which are scams? If they sound too good to be true, they usually are; learn how to recognize and protect yourself from the most common scams. If a scholarship has an application fee or other required fees, it isn't worth your time and money to apply. Best bet: don't pay for any scholarship information. Scholarship information is free and available to everyone.

Here's one really easy way to check whether a scholarship or grant is legitimate: Google it. Use a search engine of your choice and look up the scholarship name or URL, plus the word "scam." You might find that it is listed as a scam, or you might get a lot of results and discussion questioning whether it is a scam; in either case, it's probably best to skip that one and move on to the next scholarship opportunity.

The Federal Trade Commission in the US cautions students to look for these tell-tale lines:
  • "The scholarship is guaranteed or your money back."
  • "You can't get this information anywhere else."
  • "I just need your credit card or bank account number to hold this scholarship."
  • "We'll do all the work."
  • "The scholarship will cost some money."
  • "You've been selected by a 'national foundation' to receive a scholarship" or "You're a finalist" in a contest you never entered.

Other Tips for Avoiding Scholarship Scams:

  • Don't believe a promise of guaranteed scholarships. No one can guarantee that you will win a scholarship or grant.
  • You shouldn't pay money to be matched with the scholarships that suit you best. Anyone can find out information about any scholarship by searching the Internet. Don't pay anyone to do this for you.
  • Beware scholarship services that charge fees or claim that you can't get this information anywhere else. There are many free lists of scholarships available. Check with your school, library and trusted online scholarship sites before you decide to pay someone to do the work you can do yourself.
  • Don't pay an advance fee. Don't pay anyone who claims to be "holding" a scholarship for you or informs you the scholarship will cost some money. Free money shouldn't cost a thing. Ignore any news that you're a finalist in any contest that requires you to pay a fee for further consideration, or taxes on the winning scholarship.
  • Don't pay to have someone apply for scholarships for you. This just does not work. In order to be eligible for scholarships, you have to submit your own applications and write your own essays. You can't get around this, even by paying money. Scholarship committees can easily identify "canned" essays and letters.
  • Ignore the myth of unclaimed funds and the companies that advertise huge amounts of unclaimed money.
  • Don't be fooled by official-sounding names and logos. Make sure the foundation, organization or program is legitimate. Remember - just Google it!
  • If you feel as though the scholarship application and accompanying materials were never proofread, that's a red flag. Multiple spelling and grammatical errors show a lack of the professionalism that is essential to a scholarship foundation's success.
  • If the only address you can find for a scholarship is a P.O. Box, do not apply! This is definitely a scam as well. Also be wary of residential addresses listed as company headquarters. If you can't find a phone number for the scholarship sponsor, move on to another scholarship opportunity.
  • Do not give out your social security number, credit card, bank or checking account numbers to anyone claiming they need it for you to be eligible for access to "exclusive" scholarship information, or to deposit your winnings. Get information in writing first. It may be a set-up for an unauthorized withdrawal.

Nothing is more effective than your own dedicated work at finding and applying for scholarships and grants! Get the information you need to help you in this process.

Tuesday, August 10, 2010

Preparing for Scholarships: The All-Important Essay and Possible Interview

Social network profiles. Please know that your social or web profile WILL be investigated as part of the scholarship process; make sure your profile is professional and a good representation of who you are! Facebook, Twitter, your blog, LinkedIn - all of your personal branding profiles should be updated and appropriate for your current scholarship, internship, or job activities. If you don't have, or don't know the specifics of, "personal branding," read the series of posts in the SIS blog listed in the archives on the right, or search for "personal branding." Remember - you can also use the social networks to find scholarships!

The personal statement or essay. The application essay can be just as important as your GPA and extracurricular activities in helping you win a scholarship - it is probably the most important aspect of winning a merit scholarship. This is where your application needs to stand out! Create an essay outline, and have a "basic" essay written, using all your skills to create a portrait of yourself as a worthy recipient. Then read all the information that comes with the scholarship application to determine the criteria for awarding the scholarship, and emphasize these points in your essay. Make sure your essay fits the theme and answers the question concisely. Use very specific examples from your life experience. Be specific, but show passion in your writing! (Word of warning: avoid the sob story; they rarely, if ever, win scholarships. Remember that every applicant has faced difficulties. What's different and individual to you is how you've overcome those obstacles. This is more significant and memorable than merely listing your misfortunes. Scholarship committees are not as interested in problems as they are in solutions.) The judges will be reading essay after essay on the same topic, so make your essay unique and engaging, with positive energy! Read and reread your essay, refining, simplifying, and polishing. Show that you have thought deeply and broadly about what you have learned in your academic career and what you hope to learn next. Correctness and style are vital, and neatness counts. It's important to adhere to the length requirements so you aren't disqualified. Have someone with professional experience read your essay - a teacher, professor, or writing tutor - or visit the college writing center if one is available. Search the Web for articles on how to write scholarship essays. This is so very important - do your best work!

Interviews. Before you submit your applications, realize that you may need to be interviewed by the scholarship committee at some point in the process. Some academic and merit scholarships, especially those with high payouts, require a sit-down, face-to-face interview with the finalists in order to determine who is the most deserving of the award. Be prepared! If you get called in for an interview, practice your scholarship interview skills and make sure you are comfortable with the topic of your essay. Review your application and keep a copy for yourself; that way, there are no big surprises when you go into the meeting room. If you need help with interviewing skills, visit the career services office at your university. Above all, be confident, be positive, and be yourself! (And don't forget to smile!)

Wednesday, August 4, 2010

Textbook Alternatives

Textbooks. Just the word can conjure up pictures of dollar signs added to an already expensive tuition bill each semester. But you now have alternatives to purchasing new textbooks that can run into the triple digits and cost over a thousand dollars a year! To check out the options and see what works best for you, open a tabbed web browser like Internet Explorer, Mozilla Firefox, or Apple Safari; this will make it easier to keep track of and compare textbook options. You might also use a spreadsheet, or a table in a word-processing document, to track the list of books you need for each course, along with other needed information: the link to the store or website, price, tax, shipping & handling, and shipping time. (Be aware of shipping & handling costs, as well as shipping time!) Before you start, make sure you know the title, edition, author, and ISBN of each book you are searching for. In the US, schools are now required to give students this information. To find it, check with the campus bookstore (online if they have that option), ask the professor personally or by email, or check syllabi and course websites.
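
If you prefer a script to a spreadsheet, the comparison is the same arithmetic. Here is a minimal sketch; all vendor names and prices below are invented for illustration:

```python
# Each entry mirrors one row of the tracking spreadsheet described above.
# Vendors, prices, and shipping figures are made up for this example.
offers = [
    {"vendor": "Campus bookstore (new)", "price": 142.00, "shipping": 0.00, "ship_days": 0},
    {"vendor": "Used-book site A",       "price": 88.50,  "shipping": 6.99, "ship_days": 7},
    {"vendor": "Rental site B",          "price": 49.00,  "shipping": 4.99, "ship_days": 5},
]

# Total cost is price plus shipping & handling
for offer in offers:
    offer["total"] = offer["price"] + offer["shipping"]

# Cheapest overall; in practice also check ship_days against when classes start
best = min(offers, key=lambda o: o["total"])
print(f"Cheapest option: {best['vendor']} at ${best['total']:.2f}")
```

The same comparison works for rentals and e-books; just add a row per option and keep the shipping-time column in mind.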

Used Books. Certainly nothing new, but you might find new places to purchase them.

  • Students. Ask students who took the course last semester - quite often they ask the professor if the same book will be used again the following semester, and have one for sale. Also check to see if your campus has a student-run textbook selling system in place, such as a website or bulletin board.
  • Websites. Amazon.com and eBay are good places to purchase used books. You can also use Google or Bing to search for a book by typing in its name and edition and seeing what other options come up. (Make sure you don't purchase the "international" edition!)
  • Book companies and vendors. There are companies that buy and sell used textbooks, and sometimes you'll find good deals there. A few to try: Abebooks.com, CampusBooks.com, Textbooks.com, Half.com, CheapestTextbooks.com - do a search and you'll find many more. Keep in mind that book companies also buy back used textbooks, often paying for shipping, at prices better than the local campus bookstore can offer.
  • Another way to save money on textbooks is to use a price comparison service such as http://www.bookdealfinder.com/
  • If your books just aren't worth selling back, consider donating them to organizations such as Books for Africa. Better yet, hold a book drive on your campus to send used books and funds to people who need them (Books for Africa, Room to Read, Worldfund, Invisible Children, and more) - check out Better World Books, TextbookRecycling.com, and other book companies for more information. Not only will the textbooks go to a good cause, they will not end up in a landfill. AND you can earn money for your campus organization! (Win-win!)

E-Books. Again, this is not exactly new, but there are new players and better options. Many of the book publishers have online versions of the textbooks they sell at a reduced price, so check out the publisher's website for details. There are also a number of sites that offer classic texts, novels, and books free; do a search for "free eBooks" and you will find them.

A number of vendors have great new devices available called e-readers. They are small, slim (sometimes 3G wireless) reading devices that let you download ebooks in 60 seconds! No monthly fees, no service plans, no hunting for Wi-Fi hotspots. (I have a Kindle, and that means reading my web email, posting to Twitter, catching up on Facebook, and surfing the Web in the car!! And, of course, reading books!) Check them out.

If you've never read an eBook, download a free one today and check out the tools available. You can highlight, take clippings, bookmark, and (what I like best) SEARCH! When you study, wouldn't it be great to search your textbook the way you search websites for specific terms?

Be sure to check out the eBook vendors carefully: some eBooks are only available for a specific period of time, some are accessible only on the computer you used to download them, and there may be other limitations.

Renting/Lending Textbooks. Now this is an old idea with a new twist! A large number of universities, book companies, and publishers now offer textbook rental options, usually at prices much cheaper than the new retail price of the book. Check whether your university bookstore is planning a rental option for students; some are working directly with book publishers and vendors. If not, there are websites that rent textbooks online, although shipping and handling costs, along with shipping times, might make this a less desirable option than renting through a university program. One company, Cengage Learning, makes the first couple of chapters of the rented text available online to students, so last-minute ordering isn't such a problem. Cengage announced that it would start renting books to students this year at 40 to 70 percent of the sale price, and it even gives you the option of renting selected chapters of books!

There are also a number of Internet textbook-rental companies; here are two to get you started looking for the best deals: BookRenter.com and Chegg (chegg.com), which bills itself as "the Netflix for college textbooks." Both advertise books at 65-85% off the regular price of textbooks. This is another option definitely worth looking into!

But don't delay: order your textbooks as soon as possible so you have them in time for classes! (Ok, that's the professor in me speaking!) Good luck with your textbook search, and good luck in your courses this semester!

Monday, August 2, 2010

Pacific University, Oregon, offers Scholarships to International Students - Up to $9,000 Per Year

Pacific University is one of the best traditional universities in America, according to US News & World Report, ranking in the top 13% of the 1,400 American universities included in this category. Pacific can give you "Conditional Admission" if you meet the academic requirements of the university but don't yet have enough English. They offer small classes with over 50 areas of study and several strong ESL programs. The average university academic class size is 19, and the average ESL class size is 12.

Pacific is a private university offering large scholarships for international students. After scholarships, the tuition cost is about the same as at a public university, so you can get a high-quality private university education at public-university prices. Their Career Center can help you find internships before graduation and full-time paying jobs after graduation (limited to 1 year by US immigration). You can also apply for part-time jobs on campus, which will help you save a little money, improve your English, and meet people. (They cannot guarantee you a job, but they can help you apply.)

You can begin taking classes for university credit, together with ESL classes, with only an iBT 53 (475 TOEFL or 5.0 IELTS) through their ELI Transition program. Pacific has a very strong ESL program, started in 1982, to help you reach their TOEFL requirement quickly.

There is a strong feeling of community at Pacific. They have many programs to help you get connected to other students, to your professors, and to the surrounding community.

*Undergraduate Scholarships*
Pacific offers automatic scholarships to all international undergraduate students based on your high school or college grades at the time of admission.

The scholarship ranges from US$ 6,000 to US$ 9,000 per year.

The scholarship is awarded each year for four years. You do not need to apply separately; these scholarships are automatically awarded to all admitted students, and the amount will appear on your acceptance letter and I-20. While studying at Pacific, you must maintain a Pacific GPA of at least 2.0 (average score) to keep your scholarship. These scholarships are only for undergraduate students (both freshman and transfer students); they do not apply to graduate-level students.

Contact Pacific at intladmissions@pacificu.edu.

John Harn
International Admissions
Visit their International Admission Website
http://www.pacificu.edu/admissions/undergrad/international/