Wed Jun 18 09:15:57 PDT 2008 T0460 Make started

Wed Jun 18 09:17:14 PDT 2008 Running on peep.cse.ucsc.edu

Wed Jun 18 20:11:57 PDT 2008 Kevin Karplus
ORFan, new-fold or very remote homology (best hit E-value 31).
The try1-opt3 model is very gappy---I need to increase breaks to close things up.

Wed Jun 18 20:22:42 PDT 2008 Kevin Karplus
try2 started from alignments, but with a costfcn that penalizes gaps and clashes more.

Wed Jun 25 09:02:06 PDT 2008 Kevin Karplus
The MQAU quality assessment favors SAM-T08-server, Zhang-Server, and BAKER-ROBETTA.
The MQAC quality assessment favors SAM-T08-server, Zhang-Server, METATASSER, and pro-sp3-TASSER.
Metaserver runs with the try1 costfcn have been started.

Sat Jun 28 14:40:53 PDT 2008 SAM-T08-MQAO hand QA T0460 Submitted
Sat Jun 28 14:40:53 PDT 2008 SAM-T08-MQAU hand QA T0460 Submitted
Sat Jun 28 14:40:53 PDT 2008 SAM-T08-MQAC hand QA T0460 Submitted

Sat Jul 12 12:17:22 PDT 2008 Kevin Karplus
The MQAC1 and MQAU1 runs both chose SAM-T08-server_TS1.
Perhaps I should do MQAX1 and MQAX2 runs that exclude the SAM servers.

Sat Jul 12 12:27:49 PDT 2008 Kevin Karplus
MQAX1 and MQAX2 started.
Actually, there are some things I like about the MQAC1 and MQAU1 predictions (from SAM-T08-server_TS1). There are several sheets that don't quite form, but would be quite nice if they did. Perhaps adding some sheet constraints would fix things up.

Sat Jul 12 13:03:57 PDT 2008 Kevin Karplus
I tried making up some sheet constraints for try3.costfcn, but I did not come up with a consistent set. I'll try using it to optimize the MQAC1 and MQAU1 models anyway.

Sat Jul 12 15:26:25 PDT 2008 Kevin Karplus
Interestingly, the best scorers with the try3 costfcn were not try3-opt3, but MQAX2-opt3 and MQAX1-opt3, both from optimizing BAKER-ROBETTA_TS5. Those models look pretty good, but quite different from the other models I've been looking at. Perhaps I should do a run from alignments with the MQAX2 constraints.
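The MQAC/MQAU assessments above rank server models by agreement with the rest of the pool. As a hedged illustration only (not the actual SAM-T08 MQA code, whose details are not shown here), a minimal consensus quality assessor scores each model by its mean pairwise structural similarity to every other model:

```python
# Hypothetical sketch of consensus-based model quality assessment.
# Assumes a precomputed pairwise similarity matrix (e.g., GDT-like
# scores in [0, 1]); models that agree with the pool rank highest.
import numpy as np

def consensus_scores(similarity: np.ndarray) -> np.ndarray:
    """similarity[i, j] = structural similarity of model i to model j."""
    n = similarity.shape[0]
    # Average each model's similarity to the others, excluding itself.
    off_diag_sum = similarity.sum(axis=1) - np.diag(similarity)
    return off_diag_sum / (n - 1)

# Toy pool: models 0 and 1 agree closely; model 2 is an outlier.
sim = np.array([[1.0, 0.8, 0.2],
                [0.8, 1.0, 0.3],
                [0.2, 0.3, 1.0]])
scores = consensus_scores(sim)
# scores -> [0.5, 0.55, 0.25]; model 1 ranks best
```

Pure consensus scoring tends to favor models from large, similar server pools, which is consistent with runs like MQAX1/MQAX2 deliberately excluding some servers to see what else surfaces.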
Sat Jul 12 18:17:25 PDT 2008 Kevin Karplus
MQAX2 and MQAX1 score best with try4.costfcn, and try4 doesn't even do as well as try2 and try3. try4 has some difficult gaps to fill, but the sheet looks fairly plausible. I'll try polishing it in try5. For try6, I'll try polishing try2/try4 using the same costfcn as for try5.

Sat Jul 12 20:10:25 PDT 2008 Kevin Karplus
try5 gets a bit better than try6, though both have bad n_ca_c values, which gromacs makes even worse. They have bad breaks also.

Sat Jul 12 20:22:00 PDT 2008 Kevin Karplus
I currently have 4 different lineages of models that don't have much in common:
	MQAX2-opt3 from BAKER-ROBETTA_TS5
	try3-opt3 from SAM-T08-server_TS1 (and try1)
	try5-opt3 < try4-opt3 < 2ictA?
	try2-opt3 < 2j1dG?
Rosetta, naturally, likes MQAX2-opt3.gromacs0.repack-nonPC best.
I'm not convinced by any of them, so I should probably submit all 4.
Question: should I do more gap closing on the ones with big breaks?

Wed Jul 16 13:26:20 PDT 2008 Kevin Karplus
I'll do one more metaserver run, with a high beta_pair score and high neural-net property scores, excluding the SAM servers (and BAKER-ROBETTA_TS5), and see what is picked up. Since I have no clue what this protein should look like, I'll submit 5 junky models.

Wed Jul 16 17:30:08 PDT 2008 Kevin Karplus
I don't see anything reasonable in any of my models or the metaserver models. The two optimized from the BAKER-ROBETTA models are as good as I'm likely to get. I'll submit

ReadConformPDB T0460.MQAX7-opt3.gromacs0.repack-nonPC.pdb	# < BAKER-ROBETTA_TS4, best rosetta energy
ReadConformPDB T0460.MQAX2-opt3.pdb	# < BAKER-ROBETTA_TS5
ReadConformPDB T0460.try3-opt3.pdb	# < MQAC1-opt3 < SAM-T08-server_TS1
ReadConformPDB T0460.try5-opt3.pdb	# < try4-opt3 < align(2j1dG)
ReadConformPDB T0460.try2-opt3.pdb	# < align(2j1dG)

The MQAC quality assessment thought that none of the servers got up to 30% GDT, and I'm inclined to agree.
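The 30% GDT figure above refers to the Global Distance Test (GDT_TS), which averages the fraction of CA atoms within 1, 2, 4, and 8 Angstroms of the target. A rough sketch, under the simplifying assumption that the model is already superimposed on the target (real GDT searches over many superpositions to maximize each cutoff's count):

```python
# Simplified GDT_TS sketch; assumes model and target CA coordinates
# are already optimally superimposed, which real GDT does not assume.
import numpy as np

def gdt_ts(model_ca: np.ndarray, target_ca: np.ndarray) -> float:
    """Mean fraction of CA atoms within 1, 2, 4, and 8 Angstroms."""
    dist = np.linalg.norm(model_ca - target_ca, axis=1)
    return float(np.mean([(dist <= c).mean() for c in (1.0, 2.0, 4.0, 8.0)]))

# Toy example: 4 residues, displaced by 0.5, 1.5, 3, and 10 Angstroms.
target = np.array([[0., 0., 0.], [3., 0., 0.], [6., 0., 0.], [9., 0., 0.]])
model = target + np.array([[0.5, 0., 0.], [1.5, 0., 0.],
                           [3., 0., 0.], [10., 0., 0.]])
score = gdt_ts(model, target)
# per-cutoff fractions 1/4, 2/4, 3/4, 3/4 -> score 0.5625
```

A GDT_TS below 0.30 means that even at the generous 8 Angstrom cutoff, only a small fraction of residues land near their target positions, so the pessimism about all the server models is well founded.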