
Rapid-response (non-linear) teaching: report January 25, 2018

Posted by apetrov in Blogroll, Education, Near Physics, Physics, Science.

Some of you might remember my previous post about non-linear teaching, where I described a new teaching strategy that I came up with and was about to implement in teaching my undergraduate Classical Mechanics I class. Here I want to report on the outcomes of this experiment and share some of my impressions on teaching.

Course description

Our Classical Mechanics class is a gateway class for our physics majors. It is the first class they take after they are done with general physics lectures. So the students are already familiar with the (simpler version of the) material they are going to be taught. The goal of this class is to start molding physicists out of physics students. It is a rather small class (max allowed enrollment is 20 students; I had 22 in my class), which makes professor-student interaction rather easy.

Rapid-response (non-linear) teaching: generalities

To motivate the method that I proposed, I looked at some studies in experimental psychology, in particular in memory and learning studies. What I was curious about is how much is currently known about the process of learning and what suggestions I can take from the psychologists who know something about the way our brain works in retaining the knowledge we receive.

As it turns out, there are some studies on this subject (I have references, if you are interested). The earliest ones go back to 1880’s when German psychologist Hermann Ebbinghaus hypothesized the way our brain retains information over time. The “forgetting curve” that he introduced gives approximate representation of information retention as a function of time. His studies have been replicated with similar conclusions in recent experiments.

The upshot of these studies is that the loss of learned information is roughly exponential: as Ebbinghaus's forgetting curve shows, after about a day we retain only about 40% of what we learned.
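As a toy illustration (mine, not from the original studies), the exponential forgetting curve can be sketched in a few lines; the decay constant here is an assumption, calibrated so that retention after one day matches the roughly 40% quoted above:

```python
import math

# Ebbinghaus-style forgetting curve: R(t) = exp(-t / s), where s is a
# "stability" constant. Here s is chosen (an assumption, not measured data)
# so that about 40% is retained after one day.
S_DAYS = -1.0 / math.log(0.40)  # roughly 1.09 days

def retention(t_days: float) -> float:
    """Fraction of learned material retained t_days after learning."""
    return math.exp(-t_days / S_DAYS)

print(round(retention(0.0), 2))  # 1.0 right after learning
print(round(retention(1.0), 2))  # 0.4 after one day
```

Meaningful retrieval, in this picture, amounts to resetting the curve to a higher starting point each time the material is re-worked.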

Psychologists have also learned that one way to overcome this loss of information is to (meaningfully) retrieve it: this is how learning happens. Retrieval is critical for robust, durable, long-term learning. It appears that every time we retrieve learned information, it becomes more accessible in the future. How we retrieve that stored information matters, however: simply re-reading notes or looking through examples is not as effective as re-working the lecture material. How often we retrieve the stored information matters as well.

So, here is what I decided to change in the way I teach my class in light of the above-mentioned information (no pun intended).

Rapid-response (non-linear) teaching: details

To counter the single-day information loss, I changed the way homework is assigned: instead of assigning weekly homework sets of three to five problems, I introduced two types of homework assignments: short homeworks and projects.

Short homework assignments are single-problem assignments given after each class that must be completed by the next class. They are designed so that a student has to re-derive material that was discussed previously in class (with a small new twist added). For example, if the block-sliding-down-an-incline problem was discussed in class, the short assignment asks them to redo the problem with a different choice of coordinate axes. This way, instead of doing an assignment at the last minute at the end of the week, the students are forced to work out what they just learned in class every day (meaningful retrieval)!

The second type, project homework assignments, is designed to develop understanding of how the topics in a given chapter relate to each other. There are as many project assignments as there are chapters. Students get two weeks to complete them.

In the end, the students solve approximately the same number of problems over the course of the semester.

For a professor, the introduction of short homework assignments changes the way class material is presented. Depending on how students performed on the previous short homework, I adjusted the material (both speed and volume) that we discussed in class. I also designed examples for the future sections in such a way that I could repeat parts of a topic that posed difficulties in comprehension. Overall, instead of the usual "linear" propagation through the course, we moved along something akin to helical motion, returning to and spending more time on topics that students found more difficult (hence "rapid-response" or "non-linear" teaching).

Other changes were easy to introduce: for instance, using the Socratic method when working through examples. The lecture itself became an open discussion between the professor and the students.

Outcomes

So, I implemented this method teaching the Classical Mechanics I class in the Fall 2017 semester. It was not an easy exercise, mostly because it was the first time I taught this class and I had no grader help. I would say the results confirmed my expectations: the introduction of short homework assignments helps students perform better on the exams. Now, my statistics are still limited: I had only about 20 students in my class. Moreover, several students decided to either largely ignore the short homework assignments or did them irregularly. (They were given zero points for each missed short assignment.) All students generally did well on their project assignments, yet there appears to be a correlation (see the graph above) between the total number of points earned on short homework assignments and exam performance (measured by the total score on the Final and two midterms). This makes me think that the short assignments were beneficial for the students. I plan to teach this course again next year, which will improve my statistics.

I was quite surprised that my students generally liked this way of teaching. In fact, they were disappointed that I decided not to apply this method to the Mechanics II class that I am teaching this semester. They also found the problems assigned in projects considerably harder than the problems from the short assignments (as intended).

For me, this was not an easy semester. I had to develop my own set of lectures — so big thanks go to my colleagues Joern Putschke and Rob Harr, who made their notes available. I spent a lot of time preparing this course, which, I think, affected my research output last semester. Yet most of the difficulties were Wayne State-specific: Wayne State does not provide TAs for small classes, so I had not only to design all the homework assignments but also to grade them (on top of developing the lectures from the ground up). During the semester it was important to grade short assignments the same day I received them in order to re-tune the lectures, and this took a lot of my time. TAs would certainly help to run this course — so I will be applying for internal WSU educational grants to continue developing this method. I plan to employ it again next year to teach Classical Mechanics.


Non-linear teaching October 9, 2017

Posted by apetrov in Blogroll, Physics, Science.
3 comments

I wanted to share some ideas about a teaching method I am trying to develop and implement this semester. Please let me know if you’ve heard of someone doing something similar.

This semester I am teaching our undergraduate mechanics class. This is the first time I am teaching it, so I started looking into the possibility of shaking things up and maybe applying a new method of teaching. And there are plenty on offer: flipped classroom, peer instruction, Just-in-Time teaching, etc. They all aim to "move away from the inefficient old model" in which the professor lectures and students take notes. I have things to say about that, but not in this post. Suffice it to say that most of those approaches essentially try to make students work (both with the lecturer and their peers) in class and outside of it. At the same time, those methods attempt to "compartmentalize teaching", i.e. make large classes "smaller" by bringing out each individual student's contribution to class activities (using "clickers", small discussion groups, etc.). For several reasons, those approaches did not fit my goals this semester.

Our Classical Mechanics class is a gateway class for our physics majors. It is the first class they take after they are done with general physics lectures, so the students are already familiar with a (simpler version of the) material they are about to be taught. The goal of this class is to start molding physicists out of physics students: they learn to simplify problems so that physics methods can be properly applied (that's how "a Ford Mustang improperly parked at the top of an icy hill slides down..." turns into "a block slides down an incline..."), to always derive the final formula before plugging in numbers, to look at the asymptotics of their solutions as a check that a solution makes sense, and many other wonderful things.

So with all that I started doing something I’d like to call non-linear teaching. The gist of it is as follows. I give a lecture (and don’t get me wrong, I do make my students talk and work: I ask questions, we do “duels” (students argue different sides of a question), etc — all of that can be done efficiently in a class of 20 students). But instead of one homework with 3-4 problems per week I have two types of homework assignments for them: short homeworks and projects.

Short homework assignments are single-problem assignments given after each class that must be done by the next class. They are designed so that a student needs to re-derive material that we discussed previously in class, with a small new twist added. For example, for the block-sliding-down-an-incline problem discussed in class, I ask them to choose the coordinate axes in a different way and prove that the result is independent of the choice of coordinate system. Or I ask them to find the angle at which one should throw a stone to get the maximal possible range (including air resistance), etc. This way, instead of doing an assignment at the last minute at the end of the week, students have to work out what they just learned in class every day! More importantly, I get to change how I teach. Depending on how they did on the previous short homework, I adjust the material (both speed and volume) discussed in class. I also design examples for the future sections in such a way that I can repeat parts of a topic that was hard for the students previously. Hence, instead of a linear propagation through the course, we move along something akin to helical motion, returning to and spending more time on topics that students find more difficult. That's why my teaching is "non-linear".
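As a sketch of what such a short assignment checks (my own illustration, with an arbitrary 30-degree angle assumed), here is the frictionless block-on-incline acceleration computed in two coordinate frames; the magnitude g·sin θ comes out the same either way:

```python
import math

g = 9.81                     # m/s^2, gravitational acceleration
theta = math.radians(30.0)   # incline angle (example value)

# Frame A: x-axis along the incline surface. The block's acceleration
# points down the slope with magnitude g*sin(theta).
a_along = g * math.sin(theta)

# Frame B: horizontal/vertical axes. The same physical acceleration has
# components a_x = g sin(theta) cos(theta), a_y = -g sin(theta)^2
# (taking "up" as positive y).
a_x = g * math.sin(theta) * math.cos(theta)
a_y = -g * math.sin(theta) ** 2
a_mag = math.hypot(a_x, a_y)

# The magnitude is independent of the choice of axes (up to rounding):
print(abs(a_along - a_mag) < 1e-12)  # True
```

The point of the exercise, of course, is for the student to do this algebraically; the numerical check just confirms that sin²θ + cos²θ = 1 does the work.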

Project homework assignments are designed to develop understanding of how topics in a given chapter relate to each other. There are as many project assignments as there are chapters. Students get two weeks to complete them.

Overall, students solve exactly the same number of problems they would in a normal lecture class. Yet those problems are scheduled in a different way. In my scheme, students are forced to learn by constantly re-working what was just discussed in lecture. And I can quickly react (by adjusting lecture material and speed) using the constant feedback I get from students in the form of the short homeworks. Win-win!

I will do benchmarking at the end of the class by comparing my class performance to aggregate data from previous years. I’ll report on it later. But for now I would be interested to hear your comments!

 

30 years of Chernobyl disaster April 26, 2016

Posted by apetrov in Uncategorized.
2 comments

30 years ago, on 26 April 1986, the worst nuclear accident in history happened at the Chernobyl nuclear power station.

Class1986

The picture above is of my 8th grade class (I am in the front row) on a trip from Leningrad to Kiev. We wanted to make sure that we'd spend May 1st (Labor Day in the Soviet Union) in Kiev! We took that picture in Gomel, which is about 80 miles away from Chernobyl, where our train made a regular stop. After coming back to Leningrad, we were instructed to bury some pieces of clothing and shoes due to the excess radioactive dust on them...

 

“Ladies and gentlemen, we have detected gravitational waves.” February 11, 2016

Posted by apetrov in Uncategorized.
4 comments

The title says it all. Today, the Laser Interferometer Gravitational-Wave Observatory (or simply LIGO) collaboration announced the detection of gravitational waves coming from the merger of two black holes located somewhere in the southern sky, in the direction of the Magellanic Clouds. In the presentation, organized by the National Science Foundation, David Reitze (Caltech), Gabriela González (Louisiana State), Rainer Weiss (MIT), and Kip Thorne (Caltech) announced to a room full of reporters — and thousands of scientists worldwide via video feeds — that they have seen a gravitational-wave event. Their paper, along with a nice explanation of the result, can be seen here.

LIGO

The data they have are rather remarkable. The event, which occurred on 14 September 2015, was seen by both sites (Livingston and Hanford) of the experiment, as can be seen in the picture taken from their presentation. It likely happened over a billion years ago (1.3 billion light years away) and is consistent with the merger of two black holes of 29 and 36 solar masses. The resulting black hole's mass is about 62 solar masses, which means that about 3 solar masses' worth of energy (29 + 36 − 62 = 3) was radiated in the form of gravitational waves. This is a huge amount of energy! The shape of the signal is exactly what one would expect from the merger of two black holes, with 5.1 sigma significance.
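For scale, here is the back-of-the-envelope conversion of those 3 solar masses into energy via E = mc² (the constants below are standard approximate values):

```python
M_SUN = 1.989e30   # kg, approximate solar mass
C = 2.998e8        # m/s, speed of light

delta_m = 3 * M_SUN           # mass radiated away as gravitational waves
energy_j = delta_m * C ** 2   # E = m c^2
print(f"{energy_j:.1e} J")    # on the order of 10^47 joules
```

For comparison, that is vastly more than the energy the Sun will radiate as light over its entire lifetime.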

It is interesting to note that the information presented today totally confirms the rumors that have been floating around for a couple of months. Physicists like to spread rumors, as it seems.

Since gravitational waves are quadrupolar, the most straightforward way to see them is to measure the relative stretches of the detector's two arms (see another picture from the MIT LIGO site), which are perpendicular to each other. The LIGO device is a marvel of engineering: one needs to detect a signal that is tiny, roughly the size of a nucleus on the length scale of the experiment. This is done with the help of interferometry: laser beams bounce through the arms of the experiment and are then compared to each other. A small change in the relative phase of the beams can be related to the change in the relative distance traveled by each beam. This difference is induced by the passing gravitational wave, which contracts one of the arms and extends the other. How noise that could mimic a gravitational-wave signal is eliminated should be the subject of another blog post.
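To get a feel for how small the signal is, here is the arithmetic with a typical strain of ~10⁻²¹ assumed (an order-of-magnitude figure, not taken from the paper) and LIGO's 4 km arms:

```python
h = 1.0e-21   # dimensionless strain, typical order of magnitude (assumed)
L = 4.0e3     # LIGO arm length in meters

delta_L = h * L   # change in arm length induced by the passing wave
print(f"arm length change ~ {delta_L:.0e} m")  # ~4e-18 m, far smaller than a nucleus
```

An atomic nucleus is about 10⁻¹⁵ m across, so the arms move by roughly a thousandth of a nuclear diameter.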

This is really a remarkable result, even though it has been widely expected ever since Hulse and Taylor's 1974 discovery of the binary pulsar, which provided indirect evidence for gravitational waves! It seems that we now have another way to study the Universe.

Nobel Prize in Physics 2015 October 6, 2015

Posted by apetrov in Uncategorized.
add a comment

So, the Nobel Prize in Physics 2015 has been announced. To the surprise of many (including this author), it was awarded jointly to Takaaki Kajita and Arthur B. McDonald "for the discovery of neutrino oscillations, which shows that neutrinos have mass." A well-deserved Nobel Prize for a fantastic discovery.

What is this Nobel Prize all about? Some years ago (circa 1997) there were a couple of "deficit" problems in physics. First, the detected number of (electron) neutrinos coming from the Sun was measured to be less than expected. This could be explained in a number of ways. First, neutrinos could oscillate — that is, neutrinos produced as electron neutrinos in nuclear reactions in the Sun could turn into muon or tau neutrinos and thus escape detection by existing experiments, which were sensitive only to electron neutrinos. This was the most exciting possibility, and it ultimately turned out to be correct! But it was by far not the only one. For example, one could argue that the Standard Solar Model (SSM) predicted the fluxes incorrectly — after all, the flux of solar neutrinos is proportional to the core temperature raised to a very high power (roughly T^25 for 8B neutrinos, for example). So it would be reasonable to say that the neutrino flux is not well known because the temperature is not well measured (solar physicists might dispute this). Or something more exotic could happen — for instance, neutrinos could have a large magnetic moment and flip their helicity while propagating through the Sun, turning into right-handed neutrinos that are sterile.
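The T^25 scaling means the predicted flux is extremely sensitive to the assumed core temperature. A quick check (my own illustration) of how a small temperature error gets amplified:

```python
# The 8B solar neutrino flux scales roughly as T^25, so a small fractional
# error in the core temperature T is amplified enormously in the flux
# (more than 25-fold, since the relation is a power law, not linear).
EXPONENT = 25

def flux_change(temp_error: float) -> float:
    """Fractional change in flux for a given fractional change in T."""
    return (1.0 + temp_error) ** EXPONENT - 1.0

print(f"{flux_change(0.01):.0%}")  # a 1% error in T gives ~28% in flux
print(f"{flux_change(0.02):.0%}")  # a 2% error gives ~64%
```

This is why "the SSM temperature is slightly off" was a serious competitor to the oscillation hypothesis.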

The solution to this is rather ingenious: measure the neutrino flux in two ways, one sensitive to neutrino flavor (using "charged current" (CC) interactions) and one insensitive to it (using "neutral current" (NC) interactions). Heavy water — which contains deuterium — is ideal for this detection. This is exactly what the SNO collaboration, led by A. McDonald, did.


As it turned out, the NC flux was exactly what SSM predicted, while the CC flux was smaller. Hence the conclusion that electron neutrinos would oscillate into other types of neutrinos!

Another "deficit problem" was associated with the ratio of "atmospheric" muon and electron neutrinos. Cosmic rays hit Earth's atmosphere and create pions that subsequently decay into muons and muon neutrinos. The muons also eventually decay, mainly into an electron, a muon (anti)neutrino, and an electron neutrino, as

π⁺ → μ⁺ + ν_μ, followed by μ⁺ → e⁺ + ν_e + ν̄_μ (and similarly for π⁻).

From this decay chain, one would expect two muon-flavored neutrinos for every electron-flavored one.

This is not what the Super-Kamiokande experiment (T. Kajita) saw: the ratio changed with zenith angle — that is, the ratio of neutrino fluxes coming from above differed substantially from the ratio coming from below (the latter describing neutrinos that traveled through the Earth before reaching the detector). The solution was again neutrino oscillations — this time, muon neutrinos oscillating into tau neutrinos.

The presence of neutrino oscillations implies that neutrinos have (tiny) masses — something that is not predicted by the minimal Standard Model. So one can say that this is the first indication of physics beyond the Standard Model. And this is very exciting.

I think it is interesting to note that this Nobel prize might help the situation with funding of US particle physics research (if anything can help…). It shows that physics has not ended with the discovery of the Higgs boson — and Fermilab might be on the right track to uncover other secrets of the Universe.

Nobel week 2015 October 5, 2015

Posted by apetrov in Blogroll, Physics, Science.
1 comment so far

So, once again, the Nobel week is upon us. And one of the topics of conversation for the "water cooler chat" in physics departments around the world is speculation about who (besides the infamous Hungarian "physicist" — sorry, an insider joke; I can elaborate on that if asked) will get the Nobel Prize in physics this year. What is your prediction?

With the invention of various metrics for "measuring scientific performance", one can make educated guesses — or even put predictions on an industrial footing: see Thomson Reuters' predictions based on citation counts (they did get the Englert-Higgs prize right, but are almost always off). Or try your luck with online betting (sorry, no link here — I don't encourage this). So there are plenty of ways to get interested.

My predictions for 2015: Vera Rubin for Dark Matter or Deborah Jin for fermionic condensates. But you must remember that my record is no better than that of Thomson Reuters.

Harvard University is to change its name April 1, 2015

Posted by apetrov in Uncategorized.
add a comment

A phrase from William Shakespeare's Romeo and Juliet states: "What's in a name? That which we call a rose / By any other name would smell as sweet." This could not be further from the truth in the corporate world. The name of a corporation is its face, so establishing a brand requires a lot of work and money. But what happens when something goes wrong? Dealing with corporate problems often involves re-branding: changing the name and the face of the corporation. It works because customers usually do not check the history of a company before buying its products or using its services. It simply works.

With universities today run according to the corporate model, it was only a matter of time until re-branding came to the academic world. And leading universities, like Harvard, seem to be embracing the model. Since a 2013 article in the Harvard Crimson, big universities have become a focus of investigation by many leading newspapers and politicians. Harvard, in particular, has been at the center of a brewing controversy. The university with the largest endowment of any in the world has its name associated with a person who was not, in fact, its founder. As reported in a very recent internal investigation by the Harvard Crimson, John Harvard cannot be the founder of the school, because the Massachusetts Colony's vote came two years prior to Harvard's bequest (compare this to Ezra Cornell's founding of Cornell University). This has led several prominent Massachusetts politicians to suggest that the university be returned to the ownership of the Commonwealth, with its name changed to the University of Massachusetts, Cambridge. "We have a fantastic university system here in Massachusetts, with the flagship campus in Amherst," said one prominent politician who preferred not to be named. "Any university in the world would be proud to be a part of it."

Returning a prominent private university to state ownership is highly unusual nowadays and is probably specific to New England. With tightening budgets, many states seek to privatize their universities to remove them from the budget. For instance, there is talk that a large public Midwestern school, Wayne State University, will soon change its owners and its name. Two prominent figures, W. Rooney and W. Gretzky, are rumored to be working on acquiring the university and re-branding it as simply Wayne's University. And the changes are rumored to go even further. An external company, Haleburton, has already completed an assessment of the university's strengths. The company noted WSU's worldwide reputation in chemistry, physics, and medicine and its Carnegie I research status, and recommended that the school concentrate its efforts on graduating hockey, football, basketball, and baseball players. "We are preparing our graduates to have highly successful careers. What job in the United States brings more money than being an NFL or NHL player?" a member of WSU's Academic Senate was quoted as saying. "We are all excited about the change and are looking forward to what else the future will bring us."

So, you want to go on sabbatical… February 5, 2015

Posted by apetrov in Blogroll, Near Physics, Physics, Science.
add a comment

Every seven years or so, a professor at a US or Canadian university can apply for a sabbatical leave. It's a very nice thing: your university allows you to catch up on your research, learn new techniques, write a book, etc. That is to say, you become a postdoc again. And in many cases questions arise: should I stay at my university or go somewhere else? Often, year-long sabbaticals are not fully funded by the home university, i.e. you have to find additional sources of funding to keep your salary.

I am on a year-long sabbatical this academic year. So I had to find a way to fund my sabbatical (my University only pays 60% of my salary). I spent Fall 2014 semester at Fermilab and am spending Winter 2015 semester at the University of Michigan, Ann Arbor.

Here are some helpful resources for those looking to fund their sabbatical next year. As you can see from the list, they are slightly tilted toward theoretical physics. Yet many of the resources are useful for any profession. Of course, your success depends on many factors: whether you would like to stay in the US or go abroad, competition, etc.

  • General resources:

Guggenheim Foundation
Deadline: September

Fulbright Scholar Program
Deadline: August

  • USA/Canada:

Simons Fellowship
Deadline: September

IAS Princeton (Member/Sabbatical)
Deadline: November

Perimeter Institute:
Visitors
Visiting Professors
Deadline: November

Radcliffe Institute at Harvard University
Deadline: November

FNAL:
URA Visiting Scholar program
Intensity Frontier Fellowships
Deadline: twice a year

  • Europe:

Alexander von Humboldt Foundation:
Friedrich Wilhelm Bessel Research Award
Humboldt Research Award
Deadline: varies

Marie Curie International Incoming Fellowships
Deadline: varies

CERN Short Term visitors
Deadline: varies

Hans Fischer Senior Fellowship (TUM-IAS, München)
Deadline: varies

Some general info that could also be useful can be found here.

I don't pretend to have a complete list, but these sites were useful for me. I did not apply to all of those programs — and, rather unfortunately, missed the deadline for the Simons Fellowship. Many universities also have separate funds for sabbatical visitors. So if there is a university you want to visit, it's best to ask.

On a final note, it might be useful to figure out in advance how, if you get funded, the money/fellowship will find its way to your university and to you. Also, in many cases the "60% of the salary" paid by your university while you are on sabbatical leave means that you have to find not only the remaining 40% of your salary but also the fringe benefits that your university will take from your fellowship. So the amount you need to raise is more than 40% of your salary. Please don't make the mistake that I made. 🙂
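To make that last point concrete, here is the arithmetic with made-up numbers (the salary and fringe rate below are assumptions for illustration, not my actual figures; fringe policies vary by institution):

```python
salary = 100_000.0    # hypothetical annual salary
paid_fraction = 0.60  # fraction the home university pays during sabbatical
fringe_rate = 0.30    # hypothetical fringe-benefit rate charged to the fellowship

naive_gap = salary * (1 - paid_fraction)     # 40% of salary: what you might expect to need
actual_need = naive_gap * (1 + fringe_rate)  # the gap plus the fringes taken from the fellowship

print(f"naive: {naive_gap:.0f}, actual: {actual_need:.0f}")  # 40000 vs 52000
```

With these assumed numbers you would need to raise 52% of your salary, not 40% — which is exactly the kind of surprise to avoid.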

Good luck!

Data recall at the LHC? April 1, 2014

Posted by apetrov in Uncategorized.
Tags: , ,
1 comment so far

In a stunning turn of events, Large Hadron Collider (LHC) management announced a recall and review of thousands of results that came from its four main detectors, ATLAS, CMS, LHCb, and ALICE, over the course of the past several years, when it learned that the ignition switches used to start the LHC accelerator (see the image) might have been produced by GM.

GM's CEO, A. Ibarra, who is better known in the scientific world for the famous Davidson-Ibarra bound in leptogenesis, will be testifying on Capitol Hill today. This new revelation will definitely add new questions to the already long list of queries to be addressed by the embattled CEO. In particular, the infamous LHC disaster of 10 September 2008, which cost taxpayers over 21 million dollars to fix and has long been suspected to have been caused by a magnet quench, might instead have been caused by too much paper accidentally placed on a switch by a graduate student who was on duty that day.

"We want to know why it took LHC management five years to issue that recall," an unidentified US Government official said in an interview. "We want to know what is being done to correct the problem. From our side, we do everything humanly possible to accommodate US high-energy particle physics researchers and help them avoid such problems in the future. For example, we included a 6.6% cut in US HEP funding in the President's 2015 budget request." He added, "We suspected that something might be going on at the LHC after it was convincingly proven to us at our weekly seminar that the detected Higgs boson is 'simply one Xenon atom of the 1 trillion 167 billion 20 million Xenon atoms which there are in the LHC'!"

This is not the first time accelerators have caused physicists to rethink their results and designs. For example, last year Japanese scientists had to overcome the problem of unintended acceleration of positrons at their flagship facility KEK.

At this point, it is not clear how GM’s ignition switches problems would affect funding of operations at the National Ignition Facility in Livermore, CA.

 

And the 2013 Nobel Prize in Physics goes to… October 8, 2013

Posted by apetrov in Particle Physics, Physics, Science, Uncategorized.
1 comment so far

Today the 2013 Nobel Prize in Physics was awarded to François Englert (Université Libre de Bruxelles, Belgium) and Peter W. Higgs (University of Edinburgh, UK). The official citation is “for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN’s Large Hadron Collider.” What did they do almost 50 years ago that warranted their Nobel Prize today? Let’s see (for the simple analogy see my previous post from yesterday).

The overriding principle in building a theory of elementary particle interactions is symmetry. A theory must be invariant under a set of space-time symmetries (such as rotations and boosts), as well as under a set of "internal" symmetries specified by the model builder. This set of symmetries restricts how particles interact and also puts constraints on the properties of those particles. In particular, the symmetries of the Standard Model of particle physics require the W and Z bosons (the particles that mediate weak interactions) to be massless. Since we know they are massive, a mechanism that generates their masses (i.e. breaks the symmetry) must be put in place. Note that a theory with massive W and Z bosons put in by hand is not consistent (not renormalizable).

The appropriate mechanism was known by the early 1960s. It goes under the name of spontaneous symmetry breaking. In one variant, it involves a spin-zero field whose self-interactions are governed by a "Mexican hat"-shaped potential.

It is postulated that the theory ends up in a vacuum state that "breaks" the original symmetries of the model (like a point in the valley of the Mexican hat). One problem with this idea was that a theorem due to J. Goldstone required the presence of a massless spin-zero particle, which was not experimentally observed. Robert Brout, François Englert, and Peter Higgs, and somewhat later (but independently) Gerry Guralnik, C. R. Hagen, and Tom Kibble, showed that there is a loophole in the Goldstone theorem when it is applied to relativistic gauge theories. In the proposed mechanism the massless spin-zero particle does not show up; instead, it gets "eaten" by the massless vector bosons, giving them mass. Precisely as needed for the electroweak bosons W and Z to get their masses! A massive particle, the Higgs boson, is a consequence of this (BEH, or Englert-Brout-Higgs-Guralnik-Hagen-Kibble) mechanism and represents an excitation of the Higgs field about its new vacuum state.

It took about 50 years to experimentally confirm the idea by finding the Higgs boson! Tracing the historical timeline: the first paper, by Englert and Brout, was sent to Physical Review Letters on 26 June 1964 and published in the issue dated 31 August 1964. Higgs's paper was received by Physical Review Letters on 31 August 1964 (the same day Englert and Brout's paper was published) and published in the issue dated 19 October 1964. Interestingly, the original version of Higgs's paper, submitted to the journal Physics Letters, was rejected (on the grounds that it did not warrant rapid publication). Higgs revised the paper and resubmitted it to Physical Review Letters, where it was published after another revision in which he explicitly pointed out the possibility of the spin-zero particle — the one that now carries his name. CERN's announcement of the Higgs boson discovery came on 4 July 2012.

Is this the last Nobel Prize for particle physics? I think not. There are still many unanswered questions whose answers would warrant Nobel Prizes. The theory of strong interactions (which are responsible for the masses of all luminous matter in the Universe) has not yet been solved analytically, the nature of dark matter is not known, and the picture of how the Universe came to have a baryon asymmetry is not clear. Is there new physics beyond what we already know? And if yes, what is it? These are very interesting questions that need answers.