Emerging Trends in Semantic Technologies Class Spring 2010

Schedule

  • Week 1: January 26, 2010
  • Week 2: February 2, 2010
  • Week 3: February 9, 2010
  • February 16, 2010: group meeting (no formal class)
  • Week 4: February 23, 2010
  • Week 5: March 2, 2010
  • March 9, 2010: no class, spring break
  • Week 6: March 16, 2010
  • Week 7: March 23, 2010
  • Week 8: March 30, 2010
  • Week 9: April 6, 2010
  • Week 10: April 13, 2010
  • Week 11: April 20, 2010
  • Week 12: April 27, 2010
  • Week 13: May 4, 2010
  • Week 14: May 11, 2010

Weekly detail

Week 1 - January 26, 2010

  • Introduction to course

File:Csci6965-spring-2010-week1.ppt

Week 2 - February 2, 2010

CSCI 6965 2/2/10 notes

  • Student 10 minute introduction - interests and experience

Instructions for written material

  • 1/2 to 3/4 page
  • should contain a clear statement of a research interest/topic
  • should provide a few sentences on why this research is important, or on its applications
  • 3-5 keywords or phrases classifying the topic, noting whether an index set or controlled vocabulary was used
  • should indicate what this research builds upon
  • should indicate (cite) related or precursor work
  • should present or discuss (in a short paragraph) some current (or proposed) work on the topic
  • should include citation references in a suitable format (i.e., indicating the source)
  • this assignment has no grade associated with it, but it is required to be handed in

Instructions for presentation and sign up

  • 10 mins, NO longer; anticipate some questions or comments
  • slides are optional; if you use them, please post them alongside your name in the table below
  • please sign up in 10 minute blocks starting at 4:20
  • enrolled students (top of the list below) will start
  • we will follow with others who wish to participate
  • consider speaking on something you are preparing a Web Science abstract for, or something you might propose to submit to IPAW

Student 10 minute research interests presentation sign-up sheet

Presenter | Time | Topic / Title | Presentation | Notes/Questions
Dominic DiFranzo | 5:20 | Castnet Demo (web 3.0 dev for gov data) | |
Alvaro Graves | | Interests and Experiences | File:Agraves CSCI6965 presentation1.pdf |
Tim Lebo | 4:50 | Creating, Interpreting, and Re-purposing Visual Messages | File:Lebo-creating-interpreting-repurposing.pptx.pdf |
Xian Li | 5:30 | Study Supreme Court Decision Making with Linked Data | File:Linked data Justices.odp |
James McCusker | 4:30 | Representing Microarray Metadata using Provenance Models | File:Representing Microarray Metadata using Provenance Models.pdf |
James R. Michaelis | | | |
Evan Patton | 5:00 | Distributed Reasoning in Mobile, Social Web Applications | File:EPatton-MobileReasoning.pdf |
Eric Rozell | 4:40 | Considerations for an E-Science Framework | File:ERozell-EScienceFramework.pdf |
Zhenning Shangguan | 5:10 | Semantic Search | Faceted Semantic Search |
Joshua Shinavier | 5:50 | Publishing Named Graph enabled Linked Data on the Web | File:NGLinkedData.pdf |
Jin Guang Zheng | 5:40 | Consuming Linked Data | File:Consuming linked data.pdf |

Week 3 - February 9, 2010

The plan is to let everyone who wants to do a submission, on their own or in a small group, turn in an outline this week and have others provide constructive criticism on the outlines. Please make sure to post a link to your outline in the signup below. Registered class participants, please make sure to read the posted outlines. Also, please note that Peter and Li, who are covering the class, will be checking to see that outlines are posted. Papers are due March 8 to IPAW2010.

Also, finish reading the provenance survey paper mentioned in the last class:

Luc Moreau, The Foundations of Provenance on the Web, 2009. http://eprints.ecs.soton.ac.uk/18176/

Signup sheet for IPAW2010 (http://tw.rpi.edu/portal/IPAW2010/cfp) paper submission

Topic / Title | Leader | Members | Presentation Time | Outline of Paper | Video
Creating, Interpreting, and Re-purposing Visual Messages | Tim Lebo | | | CSCI 6965 - lebo outline 9 feb 2010 | lebo.mov
Publishing Named Graph enabled Linked Data on the Web | Joshua Shinavier | | | CSCI_6965_NGLinkedData_outline | shinavier.mov
Who Did That? Incorporating Provenance into a Semantic Application Framework | Evan Patton | Evan Patton, Dominic DiFranzo | 4:20 | Outline on Google Docs | patton.mov
Representing Microarray Experiment Metadata Using Provenance Models | Jim McCusker | Jim McCusker | | File:Mage-provenance.pdf | mccusker.mov
Problems in Multi-source systems | Alvaro Graves | Alvaro Graves | | CSCI_6965_-_alvaro_outline_9_feb_2010 | graves.mov
Capturing Investor's Moods from the Web | Xian Li | Xian Li | | CSCI6965-Xian Li | li.mov
Provenance-aware Faceted Search in Drupal | Zhenning Shangguan | Zhenning Shangguan, Jin Guang Zheng | | Provenance-aware Faceted Search in Drupal |

Week 4 - February 23, 2010

Instructions for presentation and sign up

  • Every student who is enrolled should review another outline and present constructive comments.

We will work as a group to improve the outlines for submission as either a paper submission or as a demo/poster/position statement submission.

For this review, you need to review the IPAW call for papers and then answer at least:

  1. Is the outline on target for the conference? If not, what suggestions can you make to improve its fit?
  2. What are the likely claims of the paper, and are they clear? If not, what do you think would help make them clear?
  3. Why should someone be interested in the work? (It may be a novel approach; others may be able to reuse the tools or methodology generated; if it is a demo, it could be a good example of provenance in action; etc.)

  • Please make sure that every one of the outlines has at least one review. If you sign up first, you get your choice; if you sign up later, you may choose only from those that do not already have reviews announced. If all outlines have reviews announced, then you may choose one for which a review is already announced.
  • presentations are to be about 10 minutes.
  • please sign up for blocks starting at 4:30
  • enrolled students (top of the list below) will start
  • we will follow with others who wish to participate
  • you may review more than one outline if you like. The class will discuss the outlines after the constructive criticism presentations.
  • if you have made updates to your previous outline, you may upload the newer version for others to critique. Please leave the old one up as well, though, and distinguish which is which.

Student 10 minute outline review presentation sign-up sheet

Presenter | Time | Outline Reviewed | Presentation
Dominic DiFranzo | 4:50 | Zhenning Shangguan's outline | Review of Zhenning Shangguan's Outline
Alvaro Graves | 5:30 | Xian's outline | File:CSCI 6965 - al review of xian outline.pdf
Tim Lebo | 5:00 | Jim McCusker's outline | CSCI 6965 - lebo review of mccusker 23 feb 2010
Xian Li | 5:40 | Evan Patton et al.'s outline | File:Critiques on Evan and Dominic's outline.odp
James McCusker | 5:50 | Josh's outline | File:NGLinkedData-critique-JPM.ppt
Evan Patton | 4:40 | Tim Lebo's outline | File:Visualization Critique.pdf
Zhenning Shangguan | 4:30 | Jim McCusker's outline | File:RepresentingMicroarrayExperimentMetadata.pdf
Joshua Shinavier | 5:20 | Alvaro's outline | File:JoshuaShinavier multisourceprovenance review.pdf
Jin Guang Zheng | 5:10 | Alvaro's outline | File:Critique on Alvaro’s Outline.ppt

Notes

Notes taken from class [1]

Week 5 - March 2, 2010

  • 30-minute research presentations (20 min presentation and 10 min questions)
  • 5 longer presentations, each on the one most relevant paper from your research interests presentation (20+10 mins each)
  • Please suggest 1 or 2 candidate papers no later than Friday, Feb 26.
  • A good paper to present is one that you would include in the related work section of a paper you are likely to submit.
  • Note: when you are presenting, you need to include an overview of the work along with a required slide on why the class should be interested in this work.
  • Students who are not presenting this week must read two other papers and post at least one question on the wiki for each paper read.
• Presenter: Zhenning Shangguan | Time: 4:30 | Topic: LOD, SPARQL | Presentation: Slides
  Paper: Executing SPARQL Queries over the Web of Linked Data (students to read: Dominic, Xian)

(from Xian) With this URI-prefetching approach, how is relevancy between the starting graph and the graphs retrieved in later iterations guaranteed? And should relevancy be part of the evaluation?

Comments from Shangguan: What does "relevancy" mean here, and what do you mean by "starting graph"? The proposed approach requires no "starting graph" in advance, only a URI in the first triple pattern of the BGP. (Although it is always a good idea to seed the local queried dataset with RDF graphs from the hub nodes of LOD, e.g., DBpedia.)

(from Dominic) This paper shows that data retrieval time, compared to the whole query execution time, decreases for larger datasets, particularly in the case of the non-blocking iterators. Why is that?

Comments from Shangguan: Note that the proposed algorithm has an implicit parallelism in it. While later iterators evaluate their triple patterns, earlier iterators may still be dereferencing URIs and retrieving RDF graphs (they do so because some of the solution mappings they offered were rejected for not satisfying the pattern). As the size of the dataset scales up, the impact of parallelism becomes much more significant because there are potentially more threads retrieving RDF graphs asynchronously to the query evaluation.
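
To make the iterator discussion concrete, here is a minimal sketch of the link-traversal idea behind the paper: a triple pattern is evaluated against a local dataset that grows as URIs discovered in intermediate solutions are dereferenced. This is an illustration only (the function name and the round-based loop are ours, not the paper's pipelined iterators), and it assumes the rdflib library is available.

    # Minimal sketch of link-traversal query execution (illustrative, not the
    # paper's iterator implementation). Requires rdflib.
    from rdflib import Graph, URIRef

    def link_traversal_match(seed_uri, pattern, rounds=3):
        """Match one triple pattern, dereferencing newly seen URIs as we go."""
        local = Graph()
        frontier = {URIRef(seed_uri)}
        seen = set()
        for _ in range(rounds):
            for uri in frontier - seen:
                try:
                    local.parse(uri)  # dereference the URI, merge returned RDF
                except Exception:
                    pass              # skip unreachable or non-RDF documents
                seen.add(uri)
            # URIs appearing in current solutions become the next frontier
            frontier = {term for t in local.triples(pattern)
                        for term in t if isinstance(term, URIRef)}
        return list(local.triples(pattern))

    # Example (hypothetical): everything asserted about Tim Berners-Lee
    # tbl = URIRef("http://dbpedia.org/resource/Tim_Berners-Lee")
    # print(link_traversal_match(tbl, (tbl, None, None)))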
• Presenter: Jin Guang Zheng | Time: 5:00 | Topic: Search Web Data | Presentation: File:Semplore.ppt
  Paper: Semplore: A scalable IR approach to search the Web of Data (students to read: Dominic, Xian)

(from Xian) What is the potential drawback of the two principles that guide the design of the ranking mechanism? How does the system know when to use which principle?

Answer 1: Consider the case where one node has millions of neighbors, none of which are related to the search term, while another node has only a few neighbors that are closely related to the search term. The user might want the second node, but in some cases the first node may have a higher rank.

Answer 2: Actually, in this paper both principles are used when calculating the ranking; see the ranking slide.

(from Dominic) Since this is one of the few end-user-focused semantic web applications, how might using data collected from user interaction with the system enable better results? We see modern search engines do this all the time. What advantages does structured data give us in this respect?

Answer 1: The paper does not take advantage of data collected from users; at the time it was published, there may not have been much user interaction to draw on. That said, data collected from user interaction would certainly help any search system, whether through suggested queries, computing related results, or ranking the results.

Answer 2: The paper is about searching structured data; it is not about using structured data to help search.
• Presenter: Evan Patton | Time: 5:30 | Topic: Peer-to-Peer Reasoning | Presentation: File:DistReasonP2P.pdf
  Papers: Distributed reasoning in a peer-to-peer setting (see also Scalable distributed ontology reasoning using DHT-based partitioning) (students to read: Dominic, Xian)

(from Xian) Since the partitioning is done by hash keys using a DHT, how does this hashing mechanism impact the performance of distributed reasoning?

(from Dominic) The proofs for most of the algorithms in the P2P work hold only if the P2PIS does not change while the algorithm is running. Is this a safe assumption in a real environment? How, and in what ways, does this limit the effectiveness of the system?
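
As background for Xian's question, the following toy sketch shows the flavor of DHT-based partitioning: each triple is routed to a peer by hashing a key (here the predicate), so the choice of key determines which reasoning joins can stay local. The peer list and key choice are illustrative assumptions, not the actual design of the systems in these papers.

    # Toy DHT-style triple partitioning (illustrative; real systems use a
    # proper DHT such as Chord rather than a static peer list).
    import hashlib

    PEERS = ["peer0", "peer1", "peer2", "peer3"]  # hypothetical peer ids

    def responsible_peer(key: str) -> str:
        """Hash the key onto the ring; that slot's peer stores the triple."""
        h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        return PEERS[h % len(PEERS)]

    def place(triple):
        s, p, o = triple
        # Partitioning by predicate keeps all triples sharing a predicate on
        # one peer, so rule applications joining on that predicate stay local.
        return responsible_peer(p)

    print(place(("ex:alice", "rdf:type", "ex:Person")))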
• Presenter: Joshua Shinavier | Time: 6:00 | Topic: Named Graphs in Linked Data | Presentation: File:BiologicalDataWebs.pdf
  Paper: Provenance and Linked Data in Biological Data Webs (students to read: Tim, Jim McCusker)

Zhao2008provenance lebo question

(from Jim) What prevents this method from being generally applied to supplying provenance about named graphs?
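
Jim's question is easier to picture with a concrete example of provenance attached to a named graph. The sketch below is our illustration using rdflib and Dublin Core terms (not the paper's actual schema): data triples go into a named graph, and statements about that graph's URI go into the default graph.

    # Attaching provenance to a named graph (illustrative vocabulary/URIs).
    # Requires rdflib.
    from rdflib import ConjunctiveGraph, Graph, Namespace, URIRef, Literal

    EX = Namespace("http://example.org/")
    DC = Namespace("http://purl.org/dc/terms/")
    store = ConjunctiveGraph()
    g_uri = URIRef("http://example.org/graphs/experiment42")

    # Data triples live inside the named graph...
    data = Graph(store.store, identifier=g_uri)
    data.add((EX.gene1, EX.expressedIn, EX.sampleA))

    # ...while the default graph describes the named graph itself.
    store.add((g_uri, DC.creator, Literal("Lab X")))
    store.add((g_uri, DC.created, Literal("2010-02-09")))

    print(store.serialize(format="trig"))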
• Presenter: Alvaro Graves | Time: 6:30 | Topic: Trust | Presentation: File:2007TrustSurvey.pdf
  Paper: A survey of trust in computer science and the Semantic Web (students to read: Jim M, Tim)

Artz2007survey lebo question

(from Jim) This paper points to a number of different models of trust and priorities of trust. It seems that the priorities of trust vary greatly depending on the domain and the user. For instance, I place less emphasis on authoritative trust than on verifiable content; this extends beyond science and tends to be my default mode of trust. On the other hand, other people using the same systems may want something authoritative but be less concerned with how the information was created. Is there any hope of bridging this divide within a single framework?

Answer: My impression is that the authors tried to cover a very broad range of concepts and name them all "trust". While most of them are related to what I understand as trust, some can be categorized more as "security" (e.g., authentication). It is in this sense that I don't think it is possible to unify all the concepts mentioned here. Also, at an implementation level, the systems mentioned attack different problems with different audiences in mind.

Alvaro and Jin need to put up their papers ASAP!

Dominic, Tim, Xian, and Jim McCusker are to review papers and post 2 questions. We need the other papers to do the paper reading assignment!

Week 6 - March 16, 2010 (note: spring break on March 9)

  • Continuation of research paper presentation.

Xian, Dominic, Tim, Jim - please suggest a paper to Deborah and post the pdf by Friday March 5.

• Presenter: James McCusker | Time: 4:30 | Topic: Semantic Data Model Integration | Presentation: File:Cabig model matching and reuse.pdf
  Paper: Metadata mapping and reuse in caBIG (students to read: Josh, Evan, Alvaro)

Question from Evan: This article seems to advocate the need for centralized systems to solve the ontology mapping problem. However, the authors also mention that centralizing this information can cause combinatorial explosions in processing time, because so much information is considered even though some of it may be irrelevant. Do you think that modularization efforts to break ontologies down into useful components might be a better compromise in this situation? Also, this method seems to work best within specialized domains. If a similar method were used on a broader range of topics, the success rate would likely decrease, because many words share the same meaning (discussed within the paper) and some words are used in many different ways across domains. Do you think tools such as WordNet and LSA could improve on the method outlined in the paper?

Questions from Alvaro: What do you think of using BLAST for this (given its similarity with Smith-Waterman)? Another question: using weights allows flexibility, but might it improve the matches for certain models while worsening others?
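
As a loose illustration of the alignment-style matching Alvaro's question refers to: the paper uses Smith-Waterman-like similarity over metadata element names, and Python's difflib is a much simpler stand-in used here only to show the shape of the computation.

    # Toy metadata-element matching by string similarity. difflib's ratio()
    # stands in for the Smith-Waterman-style alignment used in the paper.
    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    candidates = ["patientAgeAtDiagnosis", "specimenCollectionDate", "ageOfPatient"]
    query = "patient_age"  # hypothetical element to map
    for name in sorted(candidates, key=lambda n: -similarity(query, n)):
        print(f"{similarity(query, name):.2f}  {name}")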
• Presenter: Dominic DiFranzo | Time: 5:00 | Topic: Social Machines | Presentation: slides
  Paper: From the Semantic Web to social machines: A research challenge for AI on the World Wide Web (students to read: Evan, Jin, Alvaro)

Question from Evan: I believe that user context in SW applications will be pivotal in encouraging application developers to move from 'Web 2.0' to the Semantic Web, and some of the given examples touch on machines understanding law in the context of a situation. Yet there are many loopholes and exceptions to practically any established law. Since the logics we use today are very much structured around truth/falsehood, there seems to be no feasible way of encoding law without producing some inconsistent knowledge. Given your work with data.gov, do you think we should expect any effort from the government to capture law in pure logic?

Question from Jin: This paper focuses on how the semantic web provides support for social machines and discusses research challenges in creating social machines. The paper gives some examples of using the semantic web as a foundation for social machines. Since linked data is one of the most important components of the semantic web, in your opinion, how would linked data help the advent of social machines? How would social machines help in consuming linked data?

Questions from Alvaro: What are, in your opinion, the basic principles/requirements for a community to be considered a Social Machine? What is the difference (if any) between them? From the cognitive perspective, the ability to consider the Web as an "extension" of the brain can lead to several questions (I have a couple in mind). What do you think are the biggest issues (social, legal, technical) that Social Machines will have to deal with?
• Presenter: Xian Li | Time: 5:30 | Topic: Social Provenance | Presentation: File:XianLi W6 SNTrust .pdf
  Paper: Using Social Network-based Trust For Default Reasoning On The Web (see also Combining Provenance with Trust in Social Networks for Semantic Web Content Filtering) (students to read: Shangguan, Josh, Jin)

Question from Jin: The work presented in this paper uses trust values computed in a social network to prioritize default rules in a system. I am curious how this system would resolve two derived default rules that happen to have the same priority but conflict with each other. For example: A trusts C with trust value 0.5 and A trusts D with trust value 0.5; C says tomorrow will be a sunny day, but D says tomorrow will be a cloudy day.

Question from Shangguan: How do you think the proposed algorithm will scale? Any ideas on how other graph analysis methods, e.g., centrality, could help improve the algorithm?
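
Jin's tie scenario can be stated in a few lines. This is only an illustration of the question, not the paper's algorithm: when two defaults inherit identical trust-derived priorities, trust alone cannot pick a winner.

    # Defaults prioritized by the trust value of their source; equal trust
    # values leave a conflict that trust alone cannot resolve.
    defaults = [
        ("tomorrow is sunny",  "C", 0.5),  # A trusts C at 0.5
        ("tomorrow is cloudy", "D", 0.5),  # A trusts D at 0.5
    ]
    top = max(trust for _, _, trust in defaults)
    winners = [claim for claim, _, trust in defaults if trust == top]
    if len(winners) > 1:
        print("tie: trust alone cannot choose between", winners)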
• Presenter: Tim Lebo | Time: 6:00 | Topic: Visualization | Presentation: File:Lebo-kunde-munzner-16mar2010.pptx.pdf
  Papers: A Nested Model for Visualization Design and Validation (through RPI's ACM library) and Requirements for a Provenance Visualization Component (through RPI's Springer library) (students to read: Jin, Evan, Shangguan)

Question from Jin: It seems the authors are creating a guideline/model for visualization design that aims to evaluate visualization systems. But there is significant previous work on creating such models, as discussed in the comparison section, and the model proposed in this paper is quite similar to most of the existing models. If this four-layer nested model is very similar to the models used to design visualizations, why should we use it to evaluate a visualization design instead of the model used to design the visualization?

Question from Evan: Since the authors' goal is to establish a more formal process for constructing and evaluating visualizations, do you think it would be worthwhile to algorithmically encode this process using the Semantic Web? You may want to consider incorporating these ideas into the system you're working on for the VO workshop.

Question from Shangguan: It seems to me that the proposed model is heavily domain dependent (although the model itself is generic) because it starts from "Domain Problem and Data Characterization". Do you believe it can also be applied to applications that involve data from different domains, sometimes with the designers unable to know the domains in advance (which IMHO is sometimes true when building linked data apps)?

Note - no questions were posted by Josh. Others posted questions.

Week 7 - March 23, 2010

  • Survey introduction from Jim Hendler
  • Individual Group discussions on appropriate survey materials
  • Prior to class, meet with individual groups and choose 3 papers that your group will read along one theme.
  • Email Deborah by Sunday, March 21 at 6pm Eastern, confirming when the 3 of you met, and including your suggested theme and the 3 papers.
  • After approval, post both the theme and the 3 papers (with citations and pointers) on the class web site by Mon Mar 22, 6pm eastern time.
  • The following week, the group will present an integration of these 3 papers highlighting how semantic technologies are providing contributions to the work.
  • After Jim's presentation, groups will meet to discuss what should be added to those 3 papers to make an appropriate survey (such as the related work chapter you need to write in a thesis).
  • Group 1: Shangguan, Jin, Josh - Search / Linked Data
  • Group 2: Jim, Tim, Alvaro - Provenance / Trust
  • Group 3: Xian, Dominic, Evan - Social Collaborative Information Spaces

• Presenters: James McCusker, Alvaro, and Tim | Time: 6:30 | Topic: Provenance | Presentation: File:McCusker Lebo Graves ProvenanceTrustSlides.pdf | Notes/Questions: questions
  Citations (students to read): cruz2009towards; Zheng2002trust; Miles2007requirements

• Presenters: Dominic DiFranzo, Evan Patton, Xian Li | Time: 4:30 | Topic: Social Semantic Collaborative Spaces | Presentation: File:SSCS.pdf | Notes/Questions: questions
  Citations (students to read): Social SW for eScience; Semantic wiki; Games with a Purpose

• Presenters: Joshua Shinavier, Zhenning Shangguan, Jin Guang Zheng | Time: 5:00 | Topic: Semantic Web search | Presentation: File:SemanticWebSearch.pdf | Notes/Questions: questions
  Citations (students to read): Sindice.com: Weaving the Open Linked Data; Microsearch: an interface to semantic search; VisiNav: Visual Web Data Search and Navigation

Week 8 - March 30, 2010

Each of the 3 three-person groups will present the three papers you have chosen. Each member of the group should present some portion of the group presentation. Besides the technical summaries, each presentation should include a slide on:

  • Team members and for each member, what they contributed
  • Emerging themes from the 3 papers, highlighting any (important) disagreements across the papers
  • Highlights of how semantic technologies are providing contributions to the work
  • Some ideas of how the work could be enhanced with the kind of technologies we are strong in
  • Each group presentation will be 20 minutes with up to 10 minutes for discussion

Week 9 - April 6, 2010

Groups will present proposals for projects. Make sure to include at least one slide on each of the following:

(a) The problem you are addressing

(b) The general plan for a solution

(c) Anticipated Semantic technologies that will be used

(d) One or more examples (just in PowerPoint) of a benefit of semantic technologies in one grounded example, showing something that would be hard to do without semantic technologies. For example, if you are doing search, tell us an example search query, show how semantic technologies help find the answer(s), show what the answers are, and describe how this is better than without semantic technologies (see the illustrative sketch after this list).

(e) Anticipated claims that you expect to include in your write-up.

(f) Anticipated roles and responsibilities for each team member.
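
As one hypothetical instance of point (d), the sketch below shows a query that is awkward without semantic technologies: asking for restaurants in Massachusetts should also return restaurants whose city is contained in Massachusetts, which a transitive SPARQL property path handles directly. All names and data here are made up, and it assumes rdflib's SPARQL 1.1 support.

    # A grounded example of a semantic-technology benefit: transitive search.
    # Requires rdflib (for its SPARQL 1.1 property-path support).
    from rdflib import Graph

    g = Graph()
    g.parse(data="""
        @prefix ex: <http://example.org/> .
        ex:Boston ex:locatedIn ex:Massachusetts .
        ex:LegalSeafood a ex:Restaurant ; ex:locatedIn ex:Boston .
    """, format="turtle")

    q = """
        PREFIX ex: <http://example.org/>
        SELECT ?r WHERE { ?r a ex:Restaurant ; ex:locatedIn+ ex:Massachusetts . }
    """
    for row in g.query(q):
        print(row.r)  # ex:LegalSeafood, found via the transitive path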

Each group should expect to present for 20 minutes. As you plan your presentations, please allocate time to each speaker so you know how long each person has for each segment. Discussion will follow the 20-minute presentation. Note that the plan may evolve after discussion; this is OK (and is not uncommon in research projects).

Presenters | Time | Topic | Presentation
Joshua Shinavier, Zhenning Shangguan, Jin Guang Zheng | 4:00 | Real-time semantic search on the Twitter stream | File:Real-timeSemanticSearch.pdf
Alvaro Graves, Tim Lebo, James McCusker | 4:30 | Behind the data: Describing provenance in applications | File:6-apr-2010-emerging-provenance.odp.pdf, File:6-apr-2010-emerging-provenance.graffle.pdf
Dominic DiFranzo, Xian Li, Evan Patton | 5:00 | Collaborative Application Building with Social Media | File:SocialSemantiCollaboration.pdf

Week 10 - April 13, 2010

Groups provide short presentations on the project architecture, plus an outline of the related work survey for the project; show how this outline is driven by the architecture and design. Which papers are most related, and how does your work differ? These questions will need to be answered in detail in the final writeup; please identify a couple of such papers in the presentation. Presentation length: up to 20 minutes.

Presenters | Time | Topic | Presentation
Joshua Shinavier, Zhenning Shangguan, Jin Guang Zheng | 4:00 | Real-time semantic search on the Twitter stream | File:RealtimeSemanticSearchTwitterStream survey.pdf
Alvaro Graves, Tim Lebo, Jim McCusker, James Michaelis | 6:00 | Behind the data: Describing provenance in applications | File:Provenance in applications.pdf
Dominic DiFranzo, Xian Li, Evan Patton | 5:00 | Access Controls in SAF | File:SAF Architecture.pdf

Week 11 - April 20, 2010

Guest speaker - Harry Halpin - http://www.ibiblio.org/hhalpin/. Please read his paper, "Relevance Feedback Between Hypertext and Semantic Web Search" (http://tw.rpi.edu/portal/File:SemSearch.pdf), and post questions.

http://www.ibiblio.org/hhalpin/homepage/presentations/rpi2010/


From Evan: Have you done any tests of robustness on your approach? When the search engine changes its ranking algorithm, would it be necessary to recompute the relevance feedback for every result, or can you expedite the process by focusing on concepts where a relevant page has entered/left the top 10 results?


From Tim:

The central thesis of this paper is that relevance feedback can be a primary method to leverage structured searches to benefit unstructured searches, and vice versa.

Although the human judges agreed on the relevancy of results, did their relevancy ratings show a preference for hypertext over semantic web? I would expect the hypertext to be more approachable for the judges, which would lead to a measurable difference in their relevancy ratings.

Relevance feedback is automatically generated from one query source and used to modify a language model to rank query results from a second source. In this case, the paper is using the query sources Falcons and Yahoo!. However, do you think the results would change if the semantic web search engine (Falcons) were replaced with another traditional hypertext search engine? What benefit does the semantic web data provide if it is only being used, as the paper puts it, as a bag of words? Although the semantic search engine may employ different (non-PageRank) techniques, the fact that semantic sources like DBpedia are based on hypertext sources to begin with leads to a homogeneity of content across the semantic web and the hypertext web.

During feedback evaluation and parameter exploration, did you use the same 4000 result documents that were evaluated by the human judges? If so, how can results improve if you are just reordering the original 10 documents per query? If not, how did you get precision calculations without another round of human judging, and how did you convince the search engine to incorporate your language model when performing the query?


Jin:

The main focus of this paper is to show that, using relevance feedback techniques, we can improve search over unstructured data by using feedback from structured data, and vice versa. I was surprised that Semantic Web inferencing actually hurts performance. I understand that inferred data will sometimes generate irrelevant terms, but in some cases inferencing can definitely help the search; for example, if I search for all restaurants in MA, I should also get restaurants in Boston. So do you think there is any way we can reduce the noise produced by inferencing, or any other way we can improve the search by using inference?
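
For readers who want the mechanics behind these questions, here is a minimal Rocchio-style update: a simpler, vector-space relative of the language-model feedback used in the paper (terms and weights are illustrative).

    # Rocchio-style relevance feedback: move the query vector toward the
    # centroid of documents judged relevant (here, semantic web results
    # treated as bags of words, as the paper does).
    from collections import Counter

    def rocchio(query, relevant_docs, alpha=1.0, beta=0.75):
        q = Counter({t: alpha * w for t, w in query.items()})
        for doc in relevant_docs:
            for term, w in doc.items():
                q[term] += beta * w / len(relevant_docs)
        return q

    query = Counter({"eiffel": 1.0, "tower": 1.0})
    rdf_doc = Counter({"eiffel": 2.0, "paris": 1.0, "landmark": 1.0})
    print(rocchio(query, [rdf_doc]))  # re-weighted query for the second engine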


Week 12 - April 27, 2010

Individual group meetings.

Week 13 - May 4, 2010

Group 1 (Shangguan, Jin, Josh) and Group 3 (Xian, Dominic, Evan) final project presentations.

Evan, Dominic, Xian | File:SAF Access Control.pdf

Project presentations should include information addressing each of the sections to be included in the final report. They should include a demonstration system (which, if presented on May 4 rather than May 11, may include information about how the final implementation will behave on or before May 11). Presentation slots are 1 hour total, but that includes up to 15 minutes of discussion, so plan to present for 45 minutes.

Week 14 - May 11, 2010

Group 2 (Jim, Tim, Alvaro) final project presentation. All groups have write-ups due.

Each group needs to submit one group writeup. It should include the sections typical of any research paper submission: Abstract; Introduction, including motivating use cases; Technical Approach (with appropriate subsections for your topic); Discussion; Related Work; Future Work; Conclusion; References. You should also include some discussion of how evaluation could be done. Additionally, you need to include roles and responsibilities for each of the team participants.

Academic Integrity

Student-teacher relationships are built on trust. For example, students must trust that teachers have made appropriate decisions about the structure and content of the courses they teach, and teachers must trust that the assignments students turn in are their own. Acts that violate this trust undermine the educational process. The Rensselaer Handbook of Student Rights and Responsibilities defines various forms of academic dishonesty, and you should make yourself familiar with these. In this class, all assignments that are turned in for a grade must represent the student's own work. In cases where help was received, or teamwork was allowed, a notation on the assignment should indicate your collaboration. Submission of any assignment that is in violation of this policy will result in a penalty. If found in violation of the academic dishonesty policy, students may be subject to two types of penalties: the instructor administers an academic (grade) penalty, and the student may also enter the Institute judicial process and be subject to such additional sanctions as warning, probation, suspension, expulsion, and alternative actions as defined in the current Handbook of Student Rights and Responsibilities. If you have any question concerning this policy before submitting an assignment, please ask for clarification.

Attendance Policy

Enrolled students may miss at most one class without permission of instructor.

