ACHP
Properties

| Property | Value | Source |
|---|---|---|
| IUPAC Name | 2-amino-6-[2-(cyclopropylmethoxy)-6-hydroxyphenyl]-4-piperidin-4-ylpyridine-3-carbonitrile | PubChem |
| InChI | InChI=1S/C21H24N4O2/c22-11-16-15(14-6-8-24-9-7-14)10-17(25-21(16)23)20-18(26)2-1-3-19(20)27-12-13-4-5-13/h1-3,10,13-14,24,26H,4-9,12H2,(H2,23,25) | PubChem |
| InChI Key | DYVFBWXIOCLHPP-UHFFFAOYSA-N | PubChem |
| Canonical SMILES | C1CC1COC2=CC=CC(=C2C3=NC(=C(C(=C3)C4CCNCC4)C#N)N)O | PubChem |
| Molecular Formula | C21H24N4O2 | PubChem |
| DSSTOX Substance ID | DTXSID401025613 | EPA DSSTox |
| Molecular Weight | 364.4 g/mol | PubChem |
| CAS No. | 1844858-31-6 | EPA DSSTox |

Sources: PubChem (https://pubchem.ncbi.nlm.nih.gov; data deposited in or computed by PubChem) and EPA DSSTox (https://comptox.epa.gov/dashboard/DTXSID401025613; DSSTox provides a high-quality public chemistry resource for supporting improved predictive toxicology). The DSSTox record name is 2-Amino-6-[2-(cyclopropylmethoxy)-6-oxo-2,4-cyclohexadien-1-ylidene]-1,6-dihydro-4-(4-piperidinyl)-3-pyridinecarbonitrile.
A Technical Guide to the ACHP Guidelines for Archaeological Site Identification and Evaluation
This guide provides an in-depth overview of the frameworks and methodologies stipulated by the Advisory Council on Historic Preservation (ACHP) for the identification and evaluation of archaeological sites, primarily within the context of Section 106 of the National Historic Preservation Act (NHPA).[1][2] It is intended for researchers, scientists, and cultural resource management professionals involved in land use planning and development.
Introduction to Section 106 and the Role of the ACHP
Section 106 of the NHPA requires federal agencies to consider the effects of their projects on historic properties.[2][3] The ACHP is an independent federal agency that promotes the preservation, enhancement, and productive use of our nation's historic resources and advises the President and Congress on national historic preservation policy. The ACHP's regulations implement Section 106 and guide federal agencies in fulfilling their responsibilities. Archaeological sites are a key component of these historic properties, and their identification and evaluation are critical steps in the Section 106 review process. It is estimated that over 90% of archaeological excavations in the United States are conducted in compliance with Section 106.
The process of identifying and evaluating archaeological sites is not intended to be an exhaustive search for every artifact of the past. Instead, federal agencies are required to make a "reasonable and good faith effort" to identify historic properties, including archaeological sites that are listed on or eligible for listing on the National Register of Historic Places, within a project's Area of Potential Effects (APE).
Core Concepts in Archaeological Site Identification and Evaluation
Several key concepts, defined by the National Park Service and integral to the ACHP guidelines, underpin the process of identifying and evaluating archaeological sites.
| Term | Definition | Source |
|---|---|---|
| Archaeological Site | A location that contains the physical evidence of past human behavior that allows for its interpretation. | National Register Bulletin No. 36 |
| Historic Property | Any prehistoric or historic district, site, building, structure, or object included in, or eligible for inclusion on, the National Register of Historic Places. | National Historic Preservation Act |
| Significance | The importance of a property, measured by its ability to meet one or more of the four National Register criteria (A-D). | National Register Bulletin No. 15 |
| Integrity | The ability of a property to convey its significance through its physical features and context. This includes seven aspects: location, design, setting, materials, workmanship, feeling, and association. | National Register Bulletin No. 15 |
| Area of Potential Effects (APE) | The geographic area or areas within which an undertaking may directly or indirectly cause alterations in the character or use of historic properties. | 36 CFR § 800.16(d) |
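The eligibility logic implied by these definitions can be expressed compactly. The sketch below is illustrative only: the class and field names are assumptions rather than a standard schema, and it reduces "sufficient integrity" (in practice a professional judgment) to a simple non-empty check.

```python
from dataclasses import dataclass, field

# The four National Register criteria (A-D) and the seven aspects of
# integrity, per National Register Bulletin No. 15.
CRITERIA = {"A", "B", "C", "D"}
INTEGRITY_ASPECTS = {
    "location", "design", "setting", "materials",
    "workmanship", "feeling", "association",
}

@dataclass
class PropertyEvaluation:
    name: str
    criteria_met: set = field(default_factory=set)      # subset of CRITERIA
    aspects_retained: set = field(default_factory=set)  # subset of INTEGRITY_ASPECTS

    def is_eligible(self) -> bool:
        # Eligibility requires meeting at least one criterion AND retaining
        # integrity; "sufficient integrity" is a professional judgment, so
        # this sketch only checks that some aspects survive.
        return bool(self.criteria_met & CRITERIA) and bool(
            self.aspects_retained & INTEGRITY_ASPECTS
        )

site = PropertyEvaluation(
    "Example Site",
    criteria_met={"D"},  # Criterion D: information potential
    aspects_retained={"location", "materials", "association"},
)
print(site.is_eligible())  # True
```

A real determination is made in consultation and documented on National Register forms; this merely shows how the two-part test (significance plus integrity) composes.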
Methodologies for Archaeological Site Identification (Phase I)
The initial stage of locating archaeological sites is the Phase I investigation. This involves a combination of background research and fieldwork designed to identify resources and define site boundaries within the APE.
Background Research
Before fieldwork commences, a thorough literature review is essential. This includes:
- Record Checks: Consulting the appropriate state historic preservation office (SHPO) or tribal historic preservation office (THPO) for records of previously identified historic properties and surveys.
- Review of Pertinent Materials: Examining historical maps (such as Sanborn maps and historic topographic maps), aerial photographs, and gray literature (unpublished archaeological reports).
- Local Sources: Gathering information from local historical societies, public libraries, and through informant interviews.
- Predictive Modeling: In areas where the archaeology is well-known, predictive models may be used to identify locations with a high probability of containing archaeological sites.
Field Survey Methods
The selection of field methods depends on the specific environmental conditions and the nature of the project.
| Method | Description | Application |
|---|---|---|
| Pedestrian Survey | Systematic walking of the APE in transects to visually identify artifacts and surface features. | Areas with good ground surface visibility. |
| Shovel Test Pits (STPs) | Excavation of small, standardized pits at regular intervals along a grid to identify subsurface artifacts. Soil is typically screened through 1/4-inch mesh. | Areas with low surface visibility or where buried sites are expected. |
| Augering/Coring | Use of a hand or mechanical auger to examine deeper soil stratigraphy and identify buried cultural layers. | Environments with significant soil deposition. |
| Mechanical Trenching | Excavation of trenches with a backhoe to expose soil profiles and identify deeply buried sites. | Used judiciously in areas with high potential for deeply buried, significant deposits. |
| Remote Sensing | Non-invasive techniques such as ground-penetrating radar (GPR), magnetometry, and soil resistivity to detect subsurface anomalies that may represent archaeological features. | To identify features without excavation and to guide the placement of test units. |
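As a rough illustration of how field conditions map onto the methods in the table, the following sketch encodes that logic in Python. The visibility threshold and returned method labels are assumptions for demonstration, not regulatory standards; actual survey design is set in consultation with the SHPO/THPO.

```python
def recommend_survey_methods(surface_visibility: float,
                             deep_deposition: bool,
                             minimize_disturbance: bool) -> list[str]:
    """Map field conditions to candidate survey methods, mirroring the
    table above. surface_visibility is a 0-1 fraction of visible ground."""
    methods = []
    if surface_visibility >= 0.5:   # good ground surface visibility
        methods.append("pedestrian survey")
    else:                           # low visibility or buried sites expected
        methods.append("shovel test pits (1/4-inch screen)")
    if deep_deposition:             # significant soil deposition
        methods.append("augering/coring")
        methods.append("mechanical trenching (judicious)")
    if minimize_disturbance:        # non-invasive feature detection
        methods.append("remote sensing (GPR/magnetometry)")
    return methods

# A floodplain parcel with heavy vegetation cover:
print(recommend_survey_methods(0.1, True, True))
```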
The following diagram illustrates the workflow for a Phase I archaeological investigation.
Navigating the Nexus of Science and Heritage: A Technical Guide to the Section 106 Process for Researchers
For Immediate Release
Washington, D.C. – For researchers, scientists, and drug development professionals whose work intersects with federal lands, funding, or permits, a critical but often overlooked regulatory framework is Section 106 of the National Historic Preservation Act (NHPA). This in-depth guide provides a technical overview of the Section 106 process, offering clarity on its requirements, from initial project conception to the resolution of potential impacts on historic properties. Understanding this process is crucial for ensuring compliance and the successful execution of scientific endeavors.
The Core of Section 106: A Consultation-Based Process
Section 106 of the NHPA mandates that federal agencies consider the effects of their undertakings on historic properties.[1][2][3][4][5] An "undertaking" is broadly defined as any project, activity, or program funded in whole or in part under the direct or indirect jurisdiction of a federal agency, including those requiring a federal permit, license, or approval. For the scientific community, this can encompass a wide range of activities, from archaeological excavations to biological surveys and the installation of monitoring equipment on federal lands.
The Section 106 process is not designed to halt projects but to encourage a consultative approach to identify, assess, and seek ways to avoid, minimize, or mitigate any adverse effects on historic properties. Historic properties are those that are listed in or eligible for listing in the National Register of Historic Places.
The key participants in the Section 106 process include:
- The Federal Agency: The lead agency responsible for the undertaking and for initiating and overseeing the Section 106 process.
- State Historic Preservation Officer (SHPO): Appointed by the governor in each state, the SHPO and their staff have expertise in the historic resources of their state and consult with federal agencies.
- Tribal Historic Preservation Officer (THPO): Federally recognized Native American tribes may have a THPO who assumes the responsibilities of the SHPO on tribal lands. Government-to-government consultation with tribes is a critical component of the process.
- The Advisory Council on Historic Preservation (ACHP): An independent federal agency that oversees the Section 106 process and may participate in consultation, particularly in complex or contentious cases.
- Consulting Parties: These can include local governments, applicants for federal assistance or permits, and other individuals and organizations with a demonstrated interest in the undertaking.
- The Public: The public must be given an opportunity to provide their views and concerns.
The Four-Step Section 106 Process
The Section 106 process is a sequential, four-step framework designed to ensure that historic preservation is considered in project planning.
Step 1: Initiate the Process
The federal agency first determines if its proposed action constitutes an "undertaking" with the potential to affect historic properties. If so, the agency identifies the appropriate SHPO/THPO and other potential consulting parties and plans for public involvement.
Step 2: Identify Historic Properties
The agency, in consultation with the SHPO/THPO, defines the "Area of Potential Effects" (APE), which is the geographic area within which an undertaking may directly or indirectly cause changes to the character or use of historic properties. A reasonable and good faith effort must be made to identify any historic properties within the APE. This often involves background research, field surveys, and consultation with knowledgeable parties.
Step 3: Assess Adverse Effects
If historic properties are identified within the APE, the agency, in consultation with the SHPO/THPO and other consulting parties, assesses whether the undertaking will have an "adverse effect." An adverse effect occurs when an undertaking may alter, directly or indirectly, any of the characteristics of a historic property that qualify it for inclusion in the National Register in a manner that would diminish the integrity of the property.
Step 4: Resolve Adverse Effects
If it is determined that the undertaking will have an adverse effect, the federal agency must consult with the SHPO/THPO and other consulting parties to seek ways to avoid, minimize, or mitigate the adverse effects. This consultation can result in a Memorandum of Agreement (MOA) or a Programmatic Agreement (PA), which are legally binding documents outlining the agreed-upon measures.
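The four steps above form a simple decision sequence, which can be sketched as follows. The outcome strings are illustrative; real reviews involve consultation at every step and may conclude with an MOA or PA.

```python
def section_106_review(is_undertaking: bool,
                       historic_properties_found: bool,
                       adverse_effect: bool) -> str:
    """Condensed decision flow for the four-step Section 106 process."""
    # Step 1: Initiate the process
    if not is_undertaking:
        return "no Section 106 review required"
    # Step 2: Identify historic properties within the APE
    if not historic_properties_found:
        return "finding: no historic properties affected"
    # Step 3: Assess adverse effects
    if not adverse_effect:
        return "finding: no adverse effect"
    # Step 4: Resolve adverse effects through consultation
    return "consult to resolve adverse effects (MOA or PA)"

print(section_106_review(True, True, False))  # finding: no adverse effect
```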
Data Presentation: Understanding the Metrics of Section 106
While comprehensive, nationwide quantitative data on the precise timelines and costs of the Section 106 process is neither systematically collected nor publicly available, some insights can be gleaned from various sources. It is important to note that costs and timelines are highly project-specific and depend on the complexity of the undertaking and the nature of the historic properties involved.
| Metric | Available Data & Observations | Caveats |
|---|---|---|
| Typical Timelines | The SHPO/THPO generally has 30 days to review a federal agency's findings and determinations at each step of the process. The overall timeline can range from a few weeks for simple projects with no adverse effects to a year or more for complex projects requiring extensive consultation and mitigation. | These are general timeframes and can be extended by requests for additional information or prolonged consultation. |
| Costs | The cost of archaeological investigations is highly variable. For federally funded projects, the responsible federal agency typically covers the costs of surveys and mitigation; for federally permitted projects, the applicant is usually responsible. The Archeological and Historic Preservation Act of 1974 (the Moss-Bennett Act) permits federal agencies to devote up to 1% of total project funds to the recovery of archaeological data. | The 1% figure is a statutory ceiling used as a planning guideline, not a strict rule. Costs can fluctuate significantly based on the scope of work required. |
| Outcomes | The majority of Section 106 reviews do not result in a finding of adverse effect. When adverse effects are identified, resolution is typically achieved through a Memorandum of Agreement (MOA). The process is designed to be collaborative and does not mandate a specific preservation outcome. | Data on the specific outcomes of all Section 106 agreements (e.g., number of properties preserved vs. mitigated through data recovery) is not centrally compiled. |
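For rough planning purposes, the 1% guideline in the table can be applied as a back-of-envelope estimate. This helper is illustrative only; actual mitigation costs are negotiated in consultation and depend on the scope of work.

```python
def estimate_mitigation_budget(total_project_cost: float,
                               fraction: float = 0.01) -> float:
    """Back-of-envelope cultural-resource mitigation estimate using the
    ~1% guideline noted above. A planning heuristic, not a rule."""
    return total_project_cost * fraction

print(estimate_mitigation_budget(2_500_000))  # 25000.0
```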
Experimental Protocols: Methodologies in Section 106 Investigations
For many scientific researchers, particularly in the field of archaeology, the Section 106 process necessitates specific fieldwork and reporting. The following are detailed methodologies for the phased archaeological investigations commonly required for compliance.
Phase I: Archaeological Identification Survey
Objective: To determine the presence or absence of archaeological resources within the Area of Potential Effects (APE).
Methodology:
- Archival Research: A thorough review of existing records, including state archaeological site files, historic maps, land use records, and previous cultural resource surveys in the vicinity.
- Field Reconnaissance: A systematic pedestrian survey of the APE. The intensity of the survey is determined by factors such as ground visibility and the potential for buried sites.
  - Surface Inspection: In areas with good ground visibility, the surface is systematically walked in transects to identify artifacts or surface features.
  - Subsurface Testing: In areas with low surface visibility, shovel test pits (STPs) are excavated at regular intervals along a grid. STPs are typically 30-50 cm in diameter and are excavated to sterile subsoil. All excavated soil is screened through 1/4-inch mesh to recover artifacts.
- Documentation: All findings, including negative results, are thoroughly documented with notes, photographs, and maps. Any identified archaeological sites are recorded on state-specific site forms.
- Reporting: A comprehensive report is prepared that details the research design, field methodology, findings, and recommendations for further investigation if necessary.
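The subsurface testing step, placing STPs at regular intervals along a grid, can be sketched as a coordinate generator. The 15 m default interval and the rectangular survey block are assumptions for illustration; real spacing follows state survey standards and site probability.

```python
def stp_grid(width_m: float, length_m: float, interval_m: float = 15.0):
    """Lay out shovel-test-pit (STP) coordinates at regular intervals
    across a rectangular survey block, as described above."""
    pits = []
    y = 0.0
    while y <= length_m:
        x = 0.0
        while x <= width_m:
            pits.append((round(x, 1), round(y, 1)))
            x += interval_m
        y += interval_m
    return pits

grid = stp_grid(45, 30)   # a 45 m x 30 m block at 15 m intervals
print(len(grid))          # 12 pits (4 columns x 3 rows)
```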
Phase II: Archaeological Evaluation
Objective: To determine if an identified archaeological site is eligible for listing in the National Register of Historic Places (NRHP).
Methodology:
- Development of a Research Design: A formal research design is created to guide the evaluation, outlining the specific questions that will be addressed.
- More Intensive Fieldwork:
  - Controlled Surface Collection: A systematic collection of artifacts from the surface of the site to understand the types and distribution of cultural materials.
  - Test Unit Excavation: The excavation of larger, formal test units (e.g., 1x1 meter or 2x2 meter squares) to investigate the vertical and horizontal extent of cultural deposits, identify features, and assess the integrity of the site. Excavation proceeds in natural or arbitrary levels, and all soil is screened.
  - Geophysical Surveys: Non-invasive techniques such as ground-penetrating radar (GPR), magnetometry, or electrical resistivity may be used to identify subsurface features without excavation.
- Artifact Analysis: Recovered artifacts are cleaned, cataloged, and analyzed to determine their age, function, and cultural affiliation.
- Reporting: A detailed report is prepared that presents the results of the fieldwork and analysis and provides a formal determination of the site's eligibility for the NRHP.
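Cataloging during Phase II analysis typically records each find's provenience (site, unit, level). A minimal illustrative record might look like the sketch below; the field names and example values are assumptions, not a state cataloging standard.

```python
from dataclasses import dataclass

@dataclass
class ArtifactLot:
    """One cataloged lot of artifacts from a Phase II test unit.
    Field names and formats are illustrative only."""
    site_number: str   # e.g. a state site-file designation (hypothetical here)
    unit: str          # test unit, e.g. "TU-1"
    level_cm: tuple    # (top, bottom) depth below surface, in cm
    material: str      # material class, e.g. "lithic debitage"
    count: int         # number of items in the lot

lot = ArtifactLot(
    site_number="EX-0001",  # hypothetical designation
    unit="TU-1",
    level_cm=(10, 20),      # an arbitrary 10 cm level
    material="lithic debitage",
    count=37,
)
print(lot.unit, lot.count)  # TU-1 37
```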
Phase III: Archaeological Data Recovery (Mitigation)
Objective: To mitigate the adverse effects of an undertaking on a significant archaeological site by recovering the important information it contains before it is destroyed.
Methodology:
- Data Recovery Plan: A detailed plan is developed in consultation with the SHPO/THPO and other consulting parties that outlines the research questions, excavation strategy, sampling methods, and analysis plan.
- Large-Scale Excavation: Extensive, systematic excavation of the portions of the site that will be impacted by the project. This may involve the use of heavy machinery to remove sterile overburden, followed by careful hand excavation of cultural layers and features.
- Specialized Analyses: In addition to standard artifact analysis, specialized studies may be conducted, such as radiocarbon dating, faunal and floral analysis, soil chemistry, and geoarchaeology.
- Curation: All recovered artifacts and records are prepared for long-term curation at an approved facility.
- Public Outreach: Mitigation often includes a public outreach component, such as exhibits, publications, or educational programs.
- Final Report: A comprehensive final report is prepared that presents the results of the data recovery and contributes to our understanding of the past.
Section 106 and Non-Archaeological Scientific Research
While archaeology is the most common scientific discipline involved in Section 106, other fields of research can also trigger the process. Any scientific activity that is considered a federal "undertaking" and has the potential to affect historic properties is subject to review. This could include:
- Biological and Ecological Surveys: Research involving ground disturbance, such as the installation of monitoring wells, soil sampling, or the establishment of long-term study plots in areas with potential archaeological or cultural significance.
- Geological Investigations: Projects that involve trenching, drilling, or other subsurface disturbances.
- Installation of Scientific Equipment: The construction of towers, weather stations, or other research infrastructure that could have a visual or physical impact on historic properties or landscapes.
For these disciplines, the Section 106 process follows the same four steps. The key is early consultation with the lead federal agency and the SHPO/THPO to determine if the research activities constitute an undertaking and to define the APE. In many cases, a programmatic agreement can be developed for repetitive research activities to streamline the review process.
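A first-pass screen for whether a research activity may need Section 106 review can be sketched from the triggers above. This is illustrative only and is not a legal determination; a "True" result simply means early consultation with the lead federal agency and SHPO/THPO is advisable.

```python
def needs_section_106_screening(federal_funding: bool,
                                federal_permit_or_license: bool,
                                on_federal_land: bool,
                                potential_to_affect: bool) -> bool:
    """Illustrative first-pass screen for non-archaeological research.
    potential_to_affect covers ground disturbance or visual/physical
    impact on historic properties or landscapes."""
    is_undertaking = (federal_funding
                      or federal_permit_or_license
                      or on_federal_land)
    return is_undertaking and potential_to_affect

# e.g. installing monitoring wells under a federal permit on federal land:
print(needs_section_106_screening(False, True, True, True))  # True
```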
Visualizations
To further elucidate the Section 106 process, the following diagrams illustrate key workflows and relationships.
Caption: A high-level overview of the four-step Section 106 workflow.
Caption: The communication pathways between key parties in the Section 106 process.
Caption: The logical decision-making process for assessing adverse effects.
By understanding the intricacies of the Section 106 process, scientific researchers can better navigate their responsibilities, foster positive relationships with regulatory agencies and consulting parties, and contribute to both the advancement of knowledge and the preservation of our nation's rich cultural heritage. Early and proactive engagement is the key to a successful and efficient review.
References
- 1. dot.alaska.gov [dot.alaska.gov]
- 2. Frequently Asked Questions About Lead Federal Agencies in Section 106 Review | Advisory Council on Historic Preservation [achp.gov]
- 3. ACHP.gov [achp.gov]
- 4. ACHP Regulations Implementing Section 106 | DSC Workflows (U.S. National Park Service) [nps.gov]
- 5. allstarecology.com [allstarecology.com]
Navigating the Future of Preservation: A Technical Guide to the ACHP's Research Priorities
For Immediate Release
Washington, D.C. - The Advisory Council on Historic Preservation (ACHP) has outlined a forward-looking agenda that, while not rooted in traditional laboratory science, sets a clear course for research and development in the heritage sector. This technical guide provides researchers, scientists, and preservation professionals with an in-depth look at the ACHP's strategic direction, translating its policy objectives into actionable research priorities. The focus is on leveraging technology, addressing climate change, and ensuring a more inclusive and equitable approach to preserving the nation's diverse heritage.
The ACHP's role is not that of a scientific research agency but of a policy and oversight body.[1][2][3] Therefore, this guide synthesizes the council's strategic goals into a framework that can inform and guide scientific and technical research in the field of heritage preservation. The following sections detail these priorities, offering a roadmap for innovation in preservation practices.
Core Research and Development Pillars
The ACHP's strategic objectives can be distilled into three core pillars for research and development in heritage science and preservation:
- Climate Change and Environmental Sustainability: Developing and implementing strategies to mitigate the impacts of climate change on historic properties is a paramount concern.[4][5]
- Technological Advancement and Digital Integration: Harnessing modern technology to improve the efficiency and effectiveness of preservation processes is a key goal.
- Equity and Inclusive Preservation: Ensuring that the national historic preservation program reflects the full diversity of the American story is a fundamental objective.
These pillars are interconnected and represent a holistic approach to the future of historic preservation.
I. Climate Change and Environmental Sustainability
The ACHP has formally recognized the escalating threat of climate change to historic properties and has issued a policy statement to guide federal agencies and other stakeholders. Research in this area should focus on developing innovative and practical solutions for climate adaptation and mitigation.
Key Research Priorities:
| Priority Area | Research Focus | Potential Methodologies |
|---|---|---|
| Material Science and Conservation | Development of new materials and techniques for the conservation of historic materials exposed to extreme weather events (e.g., increased precipitation, temperature fluctuations, saltwater intrusion). | Accelerated weathering studies, non-destructive testing and evaluation, development of reversible and compatible conservation treatments. |
| Energy Efficiency and Renewable Energy Integration | Investigating and promoting the sensitive integration of renewable energy technologies and energy efficiency upgrades in historic buildings without compromising their historic character. | Building performance simulation, life cycle assessment, case study analysis of successful retrofitting projects. |
| Climate Adaptation Strategies | Researching and developing best practices for adapting historic properties and cultural landscapes to the impacts of climate change, such as sea-level rise, flooding, and wildfires. | Vulnerability assessments, risk mapping, development of adaptation planning frameworks, documentation and analysis of traditional and indigenous knowledge of resilience. |
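Vulnerability assessments of the kind listed above commonly combine exposure, sensitivity, and adaptive capacity into an index. The toy formula below (exposure times sensitivity, discounted by adaptive capacity, on a 0-1 scale) is an assumption for illustration, not an ACHP method.

```python
def vulnerability_score(exposure: float, sensitivity: float,
                        adaptive_capacity: float) -> float:
    """Toy climate-vulnerability index for a historic property.
    All inputs are assumed to be scaled to the 0-1 range."""
    if not all(0.0 <= v <= 1.0 for v in (exposure, sensitivity, adaptive_capacity)):
        raise ValueError("inputs must be scaled to 0-1")
    return exposure * sensitivity * (1.0 - adaptive_capacity)

# Coastal property: high exposure, moderate sensitivity, low capacity
print(round(vulnerability_score(0.9, 0.6, 0.2), 3))  # 0.432
```

In practice such scores would feed into the risk mapping and adaptation planning frameworks named in the table, with weights set by the assessment team.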
Experimental Workflow: Climate Adaptation Strategy Development
The following diagram illustrates a potential workflow for developing and implementing climate adaptation strategies for a historic coastal property.
II. Technological Advancement and Digital Integration
The ACHP advocates for the use of modern technology and digital tools to streamline preservation processes, particularly the Section 106 review, and to enhance public engagement. Research in this area should focus on the development and application of innovative digital technologies for the documentation, analysis, and management of heritage resources.
Key Research Priorities:
| Priority Area | Research Focus | Potential Methodologies |
|---|---|---|
| Digital Documentation and Survey | Advancing the use of remote sensing, GIS, 3D laser scanning, and photogrammetry for the rapid and accurate documentation of historic properties and cultural landscapes. | Development of standardized data acquisition and processing protocols, integration of digital data into preservation planning and review processes. |
| Data Management and Accessibility | Creating robust and interoperable digital platforms for the management, sharing, and long-term preservation of heritage data. | Development of linked open data models, cloud-based data management systems, and public-facing web portals for accessing heritage information. |
| Predictive Modeling and Analysis | Utilizing data analytics and machine learning to model the potential impacts of development projects on historic resources and to identify areas at risk from environmental and other threats. | Development of spatial analysis models, machine learning algorithms for feature recognition, and decision support tools for preservation planning. |
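Interoperable heritage data of the sort described above is typically exchanged as structured records. The minimal sketch below round-trips an illustrative site record through JSON; the schema is an assumption loosely modeled on GeoJSON-style geometry, not a published ACHP standard.

```python
import json

# Illustrative record for an integrated heritage data platform.
record = {
    "id": "site-0001",                 # hypothetical identifier
    "name": "Example Historic District",
    "geometry": {"type": "Point", "coordinates": [0.0, 0.0]},
    "documentation": ["3D laser scan", "photogrammetry"],
    "nrhp_status": "listed",
}

# Serialize for exchange between systems, then restore.
serialized = json.dumps(record, indent=2)
restored = json.loads(serialized)
print(restored == record)  # True: the record round-trips losslessly
```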
Logical Relationship: Integrated Heritage Data Management
The following diagram illustrates the logical relationship between different components of an integrated digital heritage management system.
III. Equity and Inclusive Preservation
A central theme in the ACHP's recent strategic planning is the commitment to a more equitable and inclusive preservation practice that recognizes the full range of the nation's heritage. Research in this area should focus on developing methodologies and frameworks for identifying, documenting, and preserving the heritage of underrepresented communities.
Key Research Priorities:
| Priority Area | Research Focus | Potential Methodologies |
|---|---|---|
| Community-Based Heritage Documentation | Developing and implementing participatory methods for documenting the tangible and intangible heritage of diverse communities. | Oral history, community mapping, collaborative digital storytelling, ethnographic research. |
| Re-evaluation of Significance Criteria | Critically examining and proposing revisions to existing criteria for evaluating the significance of historic properties to ensure they are more inclusive of diverse cultural values and narratives. | Comparative analysis of national and international significance criteria, case study analysis of properties associated with underrepresented communities. |
| Equitable Mitigation and Community Benefits | Investigating and promoting mitigation strategies for adverse effects on historic properties that provide direct and tangible benefits to affected communities. | Development of community benefit agreement frameworks, research on the socio-economic impacts of preservation in diverse neighborhoods. |
Pathway: Achieving Equitable Preservation Outcomes
The following diagram illustrates the pathway from policy to equitable outcomes in historic preservation.
Conclusion
While the Advisory Council on Historic Preservation does not conduct scientific research in the traditional sense, its strategic priorities provide a clear and compelling agenda for the heritage science and preservation community. By focusing on the critical challenges of climate change, embracing the potential of new technologies, and committing to a more equitable and inclusive approach, the ACHP is paving the way for a more resilient and relevant future for historic preservation. Researchers, scientists, and preservation professionals are encouraged to align their work with these priorities to contribute to the ongoing evolution of this vital field.
References
- 1. procurementsciences.com [procurementsciences.com]
- 2. Advisory Council on Historic Preservation (ACHP) | National Preservation Institute [npi.org]
- 3. Federal Register :: Agencies - Advisory Council on Historic Preservation [federalregister.gov]
- 4. Climate Resilience & Sustainability | Advisory Council on Historic Preservation [achp.gov]
- 5. ACHP.gov [achp.gov]
Navigating the Past, Building the Future: A Scientist's Technical Guide to the National Historic Preservation Act
An In-depth Technical Guide on the Core Principles of the National Historic Preservation Act for Researchers, Scientists, and Drug Development Professionals
Introduction: Bridging Scientific Advancement and Historic Preservation
For researchers, scientists, and professionals in drug development, the focus is firmly on the future: pioneering new technologies, discovering life-saving therapies, and expanding the boundaries of human knowledge. However, the physical spaces where this innovation occurs—be it a university campus, a corporate research park, or a manufacturing facility—are often situated within a rich historical context. The National Historic Preservation Act (NHPA) of 1966 is a key piece of federal legislation that ensures this historical context is considered as we build for the future.[1][2] This guide provides a technical overview of the foundational principles of the NHPA, with a specific focus on its relevance to the scientific and research community.
The NHPA establishes a partnership between the federal government and state, tribal, and local governments to preserve the nation's historic and cultural resources.[2] For scientists and developers, the most critical component of the NHPA is Section 106.[3][4] This section requires federal agencies to consider the effects of their undertakings on historic properties. An "undertaking" is a broad term that includes any project, activity, or program funded, permitted, licensed, or approved by a federal agency. This means that if your research facility receives federal grants, requires a federal permit for construction or expansion, or is located on federal land, the Section 106 review process will likely apply.
This guide will demystify the Section 106 process, providing a clear "experimental protocol" for compliance, presenting key data on review timelines, and offering case studies relevant to scientific facilities. By understanding the principles of the NHPA, the scientific community can proactively integrate historic preservation into project planning, ensuring that the quest for future innovation respects and preserves the significant threads of our past.
Quantitative Data on the Section 106 Process
While a comprehensive, nationwide database on all Section 106 reviews is not publicly available, data from various sources provides insight into the typical timelines and outcomes of the process. It is important to note that these figures can vary significantly based on the complexity of the project, the state, and the specific consulting parties involved.
| Metric | Reported Figure(s) | Source / Notes |
|---|---|---|
| SHPO/THPO Response Time | 30 calendar days for review of a finding or determination upon receipt of adequate information. | This 30-day clock can reset if the SHPO/THPO requests additional information. |
| Average SHPO Review Time | Varies by state; for example, in FY2024, Kentucky reported an average turnaround of 11 days. | Many reviews are completed well within the regulatory 30-day period. |
| Finding of "No Historic Properties Affected" | A 2010 study indicated that of 114,000 eligibility actions reviewed annually, approximately 85% were found not to involve historic properties. | The large majority of projects undergoing Section 106 review do not affect historic properties. |
| Finding of "Adverse Effect" | Varies by state; for example, in FY2025, Washington state reported that only 8% of over 5,000 reviews found adverse effects. | A finding of "adverse effect" is not the most common outcome of a Section 106 review. |
| Timeline for Memorandum of Agreement (MOA) | Varies significantly depending on the complexity of the adverse effect and the number of consulting parties. | Resolving adverse effects through an MOA is consultative and has no fixed timeline. |
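The SHPO/THPO review clock in the first row can be sketched as a small date calculation. This is an illustrative reading of the reset rule (the clock restarting from the date adequate information is resubmitted), not regulatory text, and the function name is our own:

```python
from datetime import date, timedelta

REVIEW_WINDOW_DAYS = 30  # SHPO/THPO review period under the Section 106 regulations

def shpo_response_deadline(submission: date, resubmissions: list[date]) -> date:
    """Return the date by which the SHPO/THPO must respond.

    Illustrative sketch: each request for additional information is
    assumed to restart the 30-calendar-day clock from the date the
    adequate information was resubmitted.
    """
    clock_start = max([submission, *resubmissions])
    return clock_start + timedelta(days=REVIEW_WINDOW_DAYS)

# Example: initial submission Jan 2; SHPO requested more information,
# which was resubmitted Jan 20, so the clock restarts from Jan 20.
deadline = shpo_response_deadline(date(2024, 1, 2), [date(2024, 1, 20)])
print(deadline)  # 2024-02-19
```
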
Experimental Protocol: The Section 106 Review Process
For a scientist or research professional, the Section 106 process can be viewed as a structured protocol with distinct phases and required documentation. Adhering to this protocol early in the project planning phase is the most effective way to ensure compliance and avoid delays.
Phase 1: Initiation and Identification
- Determine if the Project is a Federal "Undertaking": The first step is to ascertain if the project falls under the purview of Section 106. This is triggered if the project is funded, licensed, or permitted, in whole or in part, by a federal agency.
- Initiate Consultation: The federal agency (or the applicant on their behalf) must initiate consultation with the relevant State Historic Preservation Officer (SHPO) or Tribal Historic Preservation Officer (THPO). This involves notifying them of the undertaking and providing initial project information.
- Define the Area of Potential Effects (APE): The APE is the geographic area within which an undertaking may directly or indirectly cause alterations in the character or use of historic properties. The APE should be defined in consultation with the SHPO/THPO.
- Identify Historic Properties: A "good faith effort" must be made to identify historic properties within the APE. This involves:
  - Reviewing existing records, such as the National Register of Historic Places.
  - Conducting field surveys and investigations.
  - Consulting with the SHPO/THPO and other interested parties.
Phase 2: Assessment of Effects
- Apply the Criteria of Adverse Effect: For each identified historic property, the federal agency must assess the potential effects of the undertaking by applying the Criteria of Adverse Effect (36 CFR § 800.5). An adverse effect occurs when an undertaking may alter, directly or indirectly, any of the characteristics of a historic property that qualify it for inclusion in the National Register in a manner that would diminish the integrity of the property's location, design, setting, materials, workmanship, feeling, or association.
- Make a Finding of Effect: Based on the assessment, one of three findings must be made:
  - No Historic Properties Affected: This finding is appropriate when there are no historic properties in the APE, or the project will have no effect on any historic properties present.
  - No Adverse Effect: This finding is made when there are historic properties present, but the project's effects are not detrimental.
  - Adverse Effect: This finding is made when the project will have a detrimental impact on one or more historic properties.
- Submit Findings to SHPO/THPO: The finding of effect, along with supporting documentation, is submitted to the SHPO/THPO for their review and concurrence. The SHPO/THPO generally has 30 days to respond.
Phase 3: Resolution of Adverse Effects
- Continue Consultation: If a finding of "Adverse Effect" is made, the federal agency must continue to consult with the SHPO/THPO and other consulting parties to find ways to avoid, minimize, or mitigate the adverse effects.
- Develop a Memorandum of Agreement (MOA): The consultation typically results in a legally binding Memorandum of Agreement (MOA) that outlines the agreed-upon measures to resolve the adverse effects.
- Implement the MOA: The federal agency is responsible for ensuring that the stipulations of the MOA are carried out.
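The three phases above reduce to a small decision structure. The following is a minimal sketch of the findings logic; the enum and function names are our own shorthand, not regulatory terminology, and a real assessment applies the Criteria of Adverse Effect property by property in consultation:

```python
from enum import Enum

class Finding(Enum):
    NO_HISTORIC_PROPERTIES_AFFECTED = "No Historic Properties Affected"
    NO_ADVERSE_EFFECT = "No Adverse Effect"
    ADVERSE_EFFECT = "Adverse Effect"

def finding_of_effect(properties_in_ape: int, any_adverse: bool) -> Finding:
    """Map the Phase 2 assessment onto one of the three findings.

    Sketch only: the inputs stand in for the identification survey
    (Phase 1) and the Criteria of Adverse Effect analysis (Phase 2).
    """
    if properties_in_ape == 0:
        return Finding.NO_HISTORIC_PROPERTIES_AFFECTED
    if not any_adverse:
        return Finding.NO_ADVERSE_EFFECT
    return Finding.ADVERSE_EFFECT  # proceeds to Phase 3: resolution via an MOA

print(finding_of_effect(3, False).value)  # No Adverse Effect
```
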
Visualizations
The Section 106 Review Process Workflow
Caption: A flowchart illustrating the four-step Section 106 review process.
Criteria of Adverse Effect
Caption: The criteria for determining an adverse effect on a historic property.
Case Study: NASA's Glenn Research Center and the NHPA
A relevant case study for the scientific community is the experience of the National Aeronautics and Space Administration (NASA) with the NHPA. NASA's Glenn Research Center, established in 1941, is home to numerous historic facilities that were crucial to the advancement of aerospace technology. These include the Altitude Wind Tunnel, used in the development of early jet engines, and the Rocket Engine Test Facility, a National Historic Landmark.
As NASA's research needs evolved, many of these facilities required modification or were slated for decommissioning. These actions constituted federal undertakings and triggered the Section 106 process. For example, modifications to historic wind tunnels to accommodate new research programs required careful consideration of their impact on the character-defining features of these structures.
In cases where adverse effects could not be avoided, NASA entered into Memoranda of Agreement with the relevant SHPOs. These agreements stipulated mitigation measures, such as:
- Documentation: Creating detailed architectural and engineering records of the historic facilities before their alteration or demolition.
- Interpretive Displays: Installing historical markers and exhibits to educate the public about the historical significance of the facilities.
- Salvage and Preservation: Carefully removing and preserving key artifacts and components of the historic facilities for future display or study.
The NASA experience demonstrates that the NHPA does not prohibit scientific advancement at historic sites. Instead, it provides a flexible framework for balancing the needs of cutting-edge research with the responsibility of preserving our nation's scientific heritage. This approach allows for the continued use and adaptation of historic research facilities while ensuring that their important contributions to science and technology are not forgotten. For drug development professionals, this case study offers a model for how to approach the expansion or modification of existing manufacturing plants or research campuses that may have historical significance. By proactively engaging in the Section 106 process, it is possible to achieve both research objectives and preservation goals.
Conclusion: A Framework for Responsible Innovation
The National Historic Preservation Act and its Section 106 review process are not impediments to scientific progress. Rather, they provide a structured and collaborative framework for ensuring that the development of new research facilities and the advancement of science and technology are undertaken with a conscious regard for our nation's history. For researchers, scientists, and drug development professionals, understanding the core principles of the NHPA is the first step toward successful project planning and execution. By initiating consultation early, making a good faith effort to identify and assess effects on historic properties, and collaborating with SHPOs and other stakeholders to resolve adverse effects, the scientific community can continue to build the future without erasing the invaluable legacy of the past.
References
The Digital Frontier of Preservation: A Technical Guide to the ACHP's Embrace of New Technologies
For Immediate Release
Washington, D.C. - The Advisory Council on Historic Preservation (ACHP) today released a comprehensive technical guide detailing its position on the integration of new technologies in the field of historic preservation. This document, aimed at researchers, scientists, and drug development professionals, outlines the agency's strategic approach to leveraging technological innovation to enhance the efficiency and effectiveness of the Section 106 review process and address contemporary challenges such as climate change.
The guide emphasizes the ACHP's commitment to fostering a forward-thinking preservation landscape where digital tools and data-driven methodologies play a pivotal role. It highlights the agency's focus on improving access to reliable digital and geospatial information to inform federal project planning and streamline environmental reviews.[1]
Key Tenets of the ACHP's Technological Stance
The ACHP's approach to new technologies is guided by a set of core principles aimed at modernizing the federal historic preservation process while upholding the tenets of the National Historic Preservation Act (NHPA).
- Enhancing Data Accessibility and Management: A central pillar of the ACHP's strategy is the improvement of digital information infrastructure. The agency's Digital Information Task Force has put forth recommendations to enhance the availability of geospatial data on historic properties, thereby facilitating more informed and efficient project planning and Section 106 reviews.[1] This initiative is further bolstered by recent funding to develop a centralized geolocation database of U.S. historic properties, which will accelerate permitting for thousands of federal undertakings annually.[2]
- Streamlining Regulatory Processes: The ACHP advocates for and utilizes program alternatives, such as programmatic agreements and program comments, to tailor and streamline the Section 106 review process, particularly for large-scale infrastructure projects like broadband and wireless telecommunications.[3][4] This pragmatic approach seeks to balance the need for technological advancement with the imperative of historic preservation.
- Addressing Climate Change with Innovation: Recognizing the escalating threat of climate change to historic properties, the ACHP has issued a policy statement calling for the accelerated development of guidance on acceptable treatments for at-risk resources. This includes the incorporation of the latest technological innovations and material treatments to enhance the resilience of historic buildings and sites.
- Modernizing Preservation Standards: The ACHP is actively engaged in a critical review of federal historic preservation standards to ensure they are responsive to modern challenges and opportunities. This includes a more flexible application of guidance to accommodate new technologies and sustainable practices in the rehabilitation of historic structures.
Data Presentation: A Move Towards Standardization
While specific quantitative data on the adoption rates of new technologies in preservation is not centrally compiled by the ACHP, the agency strongly supports the development and use of standardized data for cultural resource management. The following table illustrates the types of data the ACHP encourages federal and state agencies to collect and manage, drawing from the principles outlined in the National Cultural Resource Management Data Standard.
| Data Category | Key Attributes | Rationale for Collection and Standardization |
|---|---|---|
| Geospatial Data | Site/Property Boundaries, District Polygons, Location Coordinates | To create an integrated, nationwide map of historic and cultural resources for early planning and impact avoidance. |
| Investigation Data | Survey Areas, Dates of Investigation, Methods Used | To track where and when cultural resource investigations have occurred, reducing redundancy and improving efficiency. |
| Resource Status | National Register Eligibility, Condition Assessments | To provide up-to-date information for project reviews and long-term management of historic properties. |
| Digital Documentation | 3D Laser Scans, Photogrammetric Models, Digital Twins | To create high-fidelity records of historic properties for conservation, research, and public engagement. |
Methodologies and Workflows
The ACHP's role is primarily that of policy and oversight; therefore, it does not prescribe detailed experimental protocols for specific preservation technologies. Instead, the agency provides guidance on the process for incorporating new technologies within the existing regulatory framework of Section 106.
The Section 106 Review Process: A Framework for Technological Integration
The four-step Section 106 review process provides a structured methodology for federal agencies to consider the effects of their undertakings on historic properties. New technologies can be integrated at various stages of this process:
- Initiate the Process: Utilize GIS and digital databases to identify consulting parties and areas of potential effect.
- Identify Historic Properties: Employ technologies such as LiDAR, ground-penetrating radar, and remote sensing to identify and delineate historic properties, including archaeological sites.
- Assess Adverse Effects: Use 3D modeling and simulations to visualize the potential impacts of a project on the character-defining features of a historic property.
- Resolve Adverse Effects: Develop mitigation measures that may include the use of new technologies, such as high-resolution digital documentation of a property to be altered or demolished.
The following diagram illustrates the logical workflow for integrating new technologies into the Section 106 process.
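The original diagram is not reproduced here, but the four-step workflow can be sketched by generating a small DOT graph description; the node labels follow the list above, and the generation approach is our own illustration:

```python
# Build a simple DOT description of the four-step Section 106 workflow.
steps = [
    "Initiate the Process",
    "Identify Historic Properties",
    "Assess Adverse Effects",
    "Resolve Adverse Effects",
]

# Chain consecutive steps into directed edges.
edges = "\n".join(f'    "{a}" -> "{b}";' for a, b in zip(steps, steps[1:]))
dot = f"digraph Section106 {{\n    rankdir=LR;\n{edges}\n}}"
print(dot)  # paste into any Graphviz renderer to draw the workflow
```
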
The ACHP's Digital Information Task Force Workflow
The ACHP's Digital Information Task Force has established a logical workflow for improving the use of digital and geospatial data in preservation. This workflow emphasizes collaboration and feedback among key stakeholders.
Conclusion
The Advisory Council on Historic Preservation is actively fostering an environment where new technologies are thoughtfully and effectively integrated into the practice of historic preservation. By promoting robust data management, streamlining regulatory processes, and embracing innovative solutions to contemporary challenges, the ACHP is ensuring that the nation's rich cultural heritage is preserved for future generations in an ever-evolving digital world. The agency will continue to provide guidance and support to federal agencies and preservation partners in harnessing the power of technology to achieve shared preservation goals.
References
- 1. Digital Information Task Force Recommendations and Action Plan | Advisory Council on Historic Preservation [achp.gov]
- 2. ACHP Receives $750,000 in Funding for Innovative Information Technology | Advisory Council on Historic Preservation [achp.gov]
- 3. Broadband Infrastructure and Section 106 Review | Advisory Council on Historic Preservation [achp.gov]
- 4. Telecommunications | Advisory Council on Historic Preservation [achp.gov]
Navigating the Nexus of Research and Preservation: A Technical Guide to the ACHP Initial Consultation Process
For Researchers, Scientists, and Drug Development Professionals
Introduction
In the intricate landscape of scientific research and development, particularly for projects with a federal nexus, a critical yet often unfamiliar regulatory pathway emerges: the Section 106 consultation process, overseen by the Advisory Council on Historic Preservation (ACHP). While seemingly distant from the laboratory or clinical trial, this process is a vital component of project approval for any federally funded, licensed, or permitted research activity that has the potential to affect historic properties. This guide provides an in-depth technical overview of the initial consultation process with the ACHP, tailored for a scientific audience. By drawing parallels with the structured, methodical approach of scientific inquiry, this document aims to demystify the process, enabling researchers and drug development professionals to navigate it effectively, ensuring both scientific advancement and the preservation of our nation's heritage.
The Section 106 review process is analogous to a preclinical safety assessment for a new therapeutic. Just as a preclinical study identifies potential adverse effects on a biological system, the Section 106 process identifies potential adverse effects on our historical and cultural landscape. Both processes are systematic, involve expert consultation, and aim to mitigate negative impacts before a project proceeds. Understanding this framework is crucial for efficient project planning and execution.
The Core of the Matter: The Section 106 Process
Section 106 of the National Historic Preservation Act of 1966 (NHPA) mandates that federal agencies consider the effects of their projects—referred to as "undertakings"—on historic properties.[1][2] The process is implemented through regulations issued by the ACHP, an independent federal agency that promotes the preservation, enhancement, and sustainable use of the nation's diverse historic resources.[3] The goal of the Section 106 process is to identify and assess the effects of a proposed project on historic properties and to seek ways to avoid, minimize, or mitigate any adverse effects.[4]
What Constitutes a "Research Project" Undertaking?
For the purposes of Section 106, a "research project" is not limited to laboratory work. It encompasses any scientific investigation that is a federal undertaking and has the potential to affect historic properties.[5] This most commonly includes, but is not limited to:
- Archaeological Surveys and Excavations: A significant portion of research projects subject to Section 106 are archaeological in nature, with estimates suggesting over 90% of archaeological excavations in the United States are conducted under this provision.
- Environmental Impact Studies: Research involving ground disturbance, construction of monitoring stations, or other activities on federal lands.
- Infrastructure for Research: The construction or modification of research facilities, access roads, or other infrastructure that may be located in or near historic sites.
Quantitative Overview of the Section 106 Consultation Process
While comprehensive, granular data on all Section 106 consultations is not centralized in a single public repository, available information provides insight into the process's timelines and outcomes.
| Metric | Value/Range | Source/Comment |
|---|---|---|
| Annual Federal Undertakings Reviewed | ~100,000 - 120,000 | This figure represents the approximate number of federal projects reviewed by State and Tribal Historic Preservation Officers each year. |
| Projects with Adverse Effects | Very small percentage | The vast majority of federal projects are found to have no adverse effect on historic properties. |
| SHPO/THPO Review Period for Findings | 30 calendar days | State Historic Preservation Officers (SHPOs) and Tribal Historic Preservation Officers (THPOs) have a standard 30-day window to review and comment on a federal agency's findings and determinations. |
| Average Time to Finalize a Section 106 Agreement | Increased by 90 days (a 20% increase) in two years | A study of Section 106 agreements (which are typically required for projects with adverse effects) indicates a trend of increasing timelines for resolution. |
| Shortest Average Timescale for Agreement | 192 days | This highlights that even in the best-case scenarios, resolving adverse effects can be a lengthy process. |
| Maximum Recorded Timescale for Agreement | 2,679 days (> 7 years) | Illustrates the potential for significant delays in complex or contentious projects. |
Experimental Protocols: Methodologies in Cultural Resource Investigation
The "experimental protocols" in the context of Section 106 consultation for research projects are the systematic methodologies employed to identify, evaluate, and mitigate effects on historic properties. These are most clearly defined in the realm of archaeological and cultural resource surveys.
Phase I: Identification Survey
- Objective: To determine the presence or absence of historic properties within the project's Area of Potential Effects (APE).
- Methodology:
  - Literature Review: A comprehensive review of existing records, including historical maps, previous survey reports, and state archaeological site files.
  - Systematic Field Survey: This typically involves a pedestrian survey where the ground surface is systematically walked and visually inspected for artifacts or features. In areas with low surface visibility, subsurface testing is conducted through the excavation of shovel tests at regular intervals (e.g., 30-meter intervals in high-probability areas). All excavated soil is screened through a 1/4-inch mesh to recover artifacts.
  - Deep Testing: In areas with the potential for buried archaeological sites, techniques such as augering, coring, or mechanical trenching may be employed.
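The 30-meter shovel-test grid described above implies a simple planning calculation. The following is a sketch assuming a rectangular survey block with tests at every grid node, including both edges; real surveys adjust for terrain, visibility, and transect layout:

```python
import math

def shovel_test_count(length_m: float, width_m: float, interval_m: float = 30.0) -> int:
    """Approximate number of shovel tests for a rectangular survey
    block gridded at a fixed interval (e.g. 30 m in high-probability
    areas), counting grid nodes on both edges.
    """
    per_length = math.floor(length_m / interval_m) + 1
    per_width = math.floor(width_m / interval_m) + 1
    return per_length * per_width

# A 300 m x 150 m block at 30 m intervals: 11 x 6 grid nodes.
print(shovel_test_count(300, 150))  # 66
```
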
Phase II: Evaluation
- Objective: To determine the significance of a located property and its eligibility for the National Register of Historic Places.
- Methodology:
  - Intensive Testing: This may involve the excavation of additional, more closely spaced shovel tests or larger test units to define the boundaries of the site and to understand its structure and integrity.
  - Artifact Analysis: Detailed analysis of recovered artifacts to determine the age, function, and cultural affiliation of the site.
  - Feature Documentation: The mapping and documentation of any identified archaeological features, such as hearths or building foundations.
Phase III: Mitigation/Data Recovery
- Objective: To mitigate the adverse effects of the project on a significant historic property, often through the systematic recovery of important information.
- Methodology:
  - Development of a Research Design and Data Recovery Plan: This plan outlines the specific research questions to be addressed and the methods for excavation and analysis.
  - Large-Scale Excavation: The systematic excavation of large areas of the site to recover a representative sample of artifacts and features.
  - Specialized Analyses: This may include radiocarbon dating, soil analysis, and other specialized studies to extract as much information as possible from the archaeological record.
  - Reporting and Curation: The preparation of a detailed technical report on the findings and the curation of all recovered artifacts and records in a recognized repository.
Visualizing the Process: Workflows and Pathways
To further clarify the initial consultation process, the following diagrams illustrate the key workflows and logical relationships using the DOT language.
Caption: The initial steps taken by a federal agency to begin the Section 106 consultation process.
Caption: The workflow for identifying and evaluating historic properties within the project area.
Caption: The communication and consultation pathways between the key participants in the Section 106 process.
Conclusion
The initial consultation process with the ACHP under Section 106 is a structured, multi-step process that is integral to responsible project planning for any research undertaking with federal involvement. For researchers, scientists, and drug development professionals, understanding this process is not an ancillary task but a core component of project management. By appreciating the parallels between the systematic methodologies of scientific research and the procedural requirements of Section 106, the scientific community can effectively engage in this process, fostering a collaborative environment where scientific progress and historic preservation can coexist and mutually inform one another. Early and proactive consultation is paramount to avoiding delays and ensuring successful project outcomes.
References
- 1. Section 106: National Historic Preservation Act of 1966 | GSA [gsa.gov]
- 2. Section 106 Review Fact Sheet | Advisory Council on Historic Preservation [achp.gov]
- 3. epa.gov [epa.gov]
- 4. Section 106 Tutorial: Roles and Responsibilities - Consulting Parties [environment.fhwa.dot.gov]
- 5. 30-Day Review Timeframes: When are They Applicable in Section 106 Review? | Advisory Council on Historic Preservation [achp.gov]
A Technical Guide to the Scientific Analysis of Historic Properties
For Researchers, Scientists, and Drug Development Professionals
This guide provides a comprehensive overview of the definition of "historic property" as established by the Advisory Council on Historic Preservation (ACHP) and delves into the scientific methodologies employed for the analysis of materials from such properties. The content is structured to offer researchers, scientists, and professionals in drug development a foundational understanding of the interdisciplinary nature of cultural heritage science, detailing experimental protocols, data presentation, and logical workflows.
Defining a "Historic Property"
The Advisory Council on Historic Preservation (ACHP), an independent federal agency, oversees the implementation of Section 106 of the National Historic Preservation Act (NHPA).[1][2] This legislation requires federal agencies to consider the effects of their projects on historic properties.[3][4] The formal definition of a "historic property" is codified in federal regulations at 36 CFR § 800.16(l)(1).[5]
A historic property is defined as any prehistoric or historic district, site, building, structure, or object that is included in, or eligible for inclusion in, the National Register of Historic Places. This designation is significant as it extends protection to properties that have not yet been formally listed but meet the criteria for inclusion. The term also encompasses artifacts, records, and archaeological remains associated with such properties. Furthermore, it includes properties of traditional religious and cultural importance to Native American tribes or Native Hawaiian organizations that satisfy the National Register criteria.
Properties are evaluated for the National Register based on four main criteria:
- Criterion A: Association with events that have made a significant contribution to the broad patterns of our history.
- Criterion B: Association with the lives of persons significant in our past.
- Criterion C: Embodiment of the distinctive characteristics of a type, period, or method of construction, or that represent the work of a master, or that possess high artistic values.
- Criterion D: Have yielded, or may be likely to yield, information important in prehistory or history.
Methodologies for Scientific Analysis
The scientific analysis of materials from historic properties is a multidisciplinary field that draws from chemistry, physics, geology, and biology to characterize the composition, provenance, and degradation of cultural heritage materials. These investigations provide invaluable insights for conservation, historical interpretation, and authentication.
Elemental Analysis
Elemental analysis techniques are fundamental in determining the constituent elements of inorganic materials such as metals, glass, ceramics, and pigments.
X-Ray Fluorescence (XRF) Spectroscopy
XRF is a non-destructive technique that bombards a sample with X-rays, causing the elements within the sample to emit fluorescent (or secondary) X-rays at characteristic energies. By measuring these energies, the elemental composition can be determined. Portable XRF (pXRF) instruments are frequently used for in-situ analysis.
Experimental Protocol for XRF Analysis of Archaeological Metals:
- Instrument Calibration: Calibrate the XRF spectrometer using certified reference materials that are matrix-matched to the artifacts being analyzed (e.g., bronze, silver alloys).
- Surface Preparation: Gently clean the surface of the metal artifact to remove any superficial dirt or corrosion that may interfere with the analysis, taking care not to damage the original surface. For quantitative analysis, a small, flat area is ideal.
- Data Acquisition: Position the XRF instrument's measurement window directly on the prepared surface of the artifact. Ensure a consistent distance and angle for all measurements.
- Measurement Parameters: Set the appropriate analytical parameters on the instrument, including the voltage, current, and acquisition time. Typical acquisition times range from 30 to 120 seconds per measurement point.
- Data Analysis: Process the resulting spectra using the instrument's software. This will involve peak identification and quantification to determine the elemental concentrations.
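The quantification step can be illustrated with a toy calculation. Real XRF software uses fundamental-parameter or empirically calibrated models; the sketch below only shows the normalization bookkeeping, and the sensitivity factors and intensities are invented for illustration:

```python
def normalize_to_wt_percent(intensities: dict[str, float],
                            sensitivity: dict[str, float]) -> dict[str, float]:
    """Toy semi-quantitative step: divide each peak intensity by an
    element sensitivity factor (derived from calibration standards),
    then normalize the results to 100 wt%.
    """
    raw = {el: intensities[el] / sensitivity[el] for el in intensities}
    total = sum(raw.values())
    return {el: round(100.0 * v / total, 2) for el, v in raw.items()}

# Hypothetical bronze measurement (counts and factors are made up):
result = normalize_to_wt_percent(
    {"Cu": 9000.0, "Sn": 1200.0},
    {"Cu": 100.0, "Sn": 150.0},
)
print(result)
```
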
Scanning Electron Microscopy with Energy-Dispersive X-Ray Spectroscopy (SEM-EDS)
SEM-EDS provides high-resolution imaging of a sample's surface and localized elemental analysis. A focused beam of electrons is scanned across the sample, generating various signals, including secondary electrons for imaging and characteristic X-rays for elemental analysis.
Experimental Protocol for SEM-EDS Analysis of Historic Pigments:
- Sample Preparation: A minute sample of the pigment is carefully removed from the historic object using a scalpel under a microscope. The sample is then mounted on an aluminum stub using a carbon adhesive tab and sputter-coated with a thin layer of carbon to make it conductive.
- Instrument Setup: The prepared sample is placed into the SEM chamber, and a vacuum is created. The electron beam is generated and focused on the sample.
- Imaging: Secondary electron or backscattered electron detectors are used to obtain high-magnification images of the pigment particles, revealing their morphology and texture.
- EDS Analysis: The electron beam is focused on a specific point of interest on the pigment particle, or an area is mapped. The emitted X-rays are collected by the EDS detector.
- Spectral Analysis: The EDS software generates a spectrum showing the characteristic X-ray peaks of the elements present. This allows for the qualitative and semi-quantitative determination of the elemental composition of the pigment.
Molecular and Structural Analysis
These techniques are crucial for identifying the molecular composition and crystalline structure of both organic and inorganic materials.
Fourier-Transform Infrared (FTIR) Spectroscopy
FTIR spectroscopy identifies chemical bonds in a molecule by producing an infrared absorption spectrum. It is particularly useful for characterizing organic materials like textiles, binding media, and resins, as well as some inorganic compounds.
Experimental Protocol for FTIR Analysis of Historical Textiles:
- Sample Preparation: A small fiber sample is carefully removed from the textile. For Attenuated Total Reflectance (ATR)-FTIR, the fiber can be placed directly on the ATR crystal.
- Data Acquisition: The sample is placed in the FTIR spectrometer. An infrared beam is passed through the sample, and the instrument measures how much of the infrared radiation is absorbed at each wavelength.
- Spectral Collection: The spectrum is typically collected over a range of 4000 to 400 cm⁻¹. Multiple scans are often averaged to improve the signal-to-noise ratio.
- Data Analysis: The resulting spectrum is a plot of absorbance or transmittance versus wavenumber. The peaks in the spectrum correspond to the vibrational frequencies of the chemical bonds in the sample, allowing for the identification of the material (e.g., cellulose for cotton or linen, protein for wool or silk).
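The final identification step can be sketched as a simple peak-matching routine. The band positions below are approximate textbook values for cellulosic and proteinaceous fibers, and the matching rule is a deliberate simplification of true spectral library searching:

```python
# Approximate characteristic absorption bands (cm^-1), for illustration only.
REFERENCE_BANDS = {
    "cellulosic (cotton/linen)": [3330, 2900, 1030],  # O-H, C-H, C-O stretches
    "proteinaceous (wool/silk)": [3290, 1650, 1540],  # N-H, amide I, amide II
}

def best_match(observed_peaks: list[float], tolerance: float = 20.0) -> str:
    """Return the fiber class whose reference bands match the most
    observed peaks within the tolerance window.
    """
    def hits(bands: list[float]) -> int:
        return sum(any(abs(p - b) <= tolerance for p in observed_peaks)
                   for b in bands)
    return max(REFERENCE_BANDS, key=lambda name: hits(REFERENCE_BANDS[name]))

print(best_match([3325, 2895, 1028]))  # cellulosic (cotton/linen)
```
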
Organic Residue Analysis
The analysis of organic residues preserved in or on artifacts can provide direct evidence of past human activities, such as diet, food preparation techniques, and the use of various natural products.
GC-MS is a powerful technique for separating, identifying, and quantifying complex mixtures of volatile organic compounds. It is widely used for the analysis of lipids, waxes, and resins from archaeological contexts.
Experimental Protocol for GC-MS Analysis of Organic Residues in Ceramics:
- Sample Preparation: A small fragment of the ceramic sherd is ground into a fine powder.
- Lipid Extraction: The powdered ceramic is subjected to solvent extraction (e.g., using a mixture of chloroform and methanol) to dissolve the absorbed organic residues.
- Derivatization: The extracted lipids are chemically modified (derivatized) to make them more volatile and suitable for GC analysis. A common method is transesterification to form fatty acid methyl esters (FAMEs).
- GC-MS Analysis: The derivatized extract is injected into the gas chromatograph. The different compounds in the mixture are separated based on their boiling points and interaction with the stationary phase of the GC column. As each compound elutes from the column, it enters the mass spectrometer, which ionizes the molecules and separates the ions based on their mass-to-charge ratio, providing a unique mass spectrum for each compound.
- Data Interpretation: The resulting chromatogram shows a series of peaks, each representing a different compound. By comparing the retention times and mass spectra of the peaks to those of known standards and library databases, the organic compounds present in the residue can be identified.
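The retention-time half of the data-interpretation step can be sketched as follows. Retention times are column- and method-specific, so the values in this small library are hypothetical placeholders; real identification combines retention-time matching with mass-spectral library comparison, which is omitted here.

```python
# Sketch: match chromatogram peaks to a small in-house library by retention
# time (minutes). The library retention times are hypothetical; real values
# depend on the column, temperature program, and carrier gas flow.
LIBRARY = {
    "palmitic acid ME (C16:0)": 18.4,
    "stearic acid ME (C18:0)": 20.9,
    "oleic acid ME (C18:1)": 20.5,
}

def assign_peaks(peak_rts, tolerance=0.15):
    """Return a list of (retention time, best matching compound or None)."""
    assignments = []
    for rt in peak_rts:
        candidates = [(abs(rt - lib_rt), name)
                      for name, lib_rt in LIBRARY.items()
                      if abs(rt - lib_rt) <= tolerance]
        assignments.append((rt, min(candidates)[1] if candidates else None))
    return assignments

peaks = assign_peaks([18.45, 20.52, 22.0])
print(peaks)
```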
Quantitative Data Presentation
The presentation of quantitative data in a structured format is essential for the comparison and interpretation of analytical results. The following tables provide examples of how elemental composition data from the analysis of historic artifacts can be presented.
Table 1: Elemental Composition of Roman Coins by SEM-EDS (wt%)
| Coin ID | Cu | Pb | Sn | Fe | Other |
|---|---|---|---|---|---|
| RC-01 | 92.13 | - | 2.26 | - | Ag: 5.61 |
| RC-02 | 97.25 | - | - | - | Ag: 2.75 |
| RC-03 | 85.2 | 10.5 | 3.1 | 1.2 | - |
| RC-04 | 98.12 | 1.11 | - | - | - |
| RC-05 | 65.0 | 11.59 | 12.0 | - | - |
Data compiled from multiple sources.
Table 2: Chemical Composition of Roman Coins by Electron Microprobe Analysis (EMPA) (mass %)
| Coin ID | Cu (wt.%) | Sn (wt.%) | Pb (wt.%) | Fe (wt.%) | Zn (wt.%) |
|---|---|---|---|---|---|
| Augustus As (#4) | 96.5 - 99 | < 1 | < 1 | < 0.5 | < 0.5 |
| Quadrans of Caligula (#6) | 97 - 99 | < 1 | < 1 | < 0.5 | < 0.5 |
| Nummus radians of Galerius Caesar (#10) | 96.5 - 99 | < 1 | < 1 | < 0.5 | < 0.5 |
| Aes Litra (#1) | 0 - 35 | 52 - 94 | < 2 | < 1 | - |
| Claudius As (#5) | 70 - 85 | < 2 | 10 - 25 | < 1 | < 1 |
Data adapted from a study on Roman coins.
Visualization of Workflows and Relationships
Diagrams are essential for visualizing complex processes and relationships in the scientific analysis of historic properties. The following diagrams, created using the DOT language, illustrate a general workflow and a more specific decision-making process.
References
1. researchgate.net
2. researchgate.net
3. inis.iaea.org
4. Different Analytical Procedures for the Study of Organic Residues in Archeological Ceramic Samples with the Use of Gas Chromatography-Mass Spectrometry. PubMed (pubmed.ncbi.nlm.nih.gov).
5. researchgate.net
Application Notes and Protocols for Remote Sensing in ACHP Section 106 Surveys
Audience: Researchers and scientists in archaeology, historic preservation, and cultural resource management.
Introduction: Section 106 of the National Historic Preservation Act of 1966 (NHPA) requires federal agencies to consider the effects of their projects on historic properties. This process involves a series of steps, including identifying historic properties and assessing the effects of the undertaking. Remote sensing technologies offer a powerful, non-invasive toolkit for conducting Section 106 surveys, enabling large-scale analysis, enhancing discovery of archaeological sites and historic features, and providing valuable data for decision-making. These application notes and protocols provide a detailed guide to integrating various remote sensing techniques into the Section 106 review process.
Application Notes
Integrating Remote Sensing into the Four-Step Section 106 Process
Remote sensing can be effectively applied at each stage of the Section 106 process:
- Step 1: Initiate the Section 106 Process: While remote sensing is not directly used in the initial administrative steps, the data it can provide should be considered during the planning phase to scope the potential need for surveys.
- Step 2: Identify Historic Properties: This is where remote sensing is most impactful. It aids in defining the Area of Potential Effects (APE) and in the identification of previously unknown archaeological sites and historic structures.[1][2]
- Step 3: Assess Adverse Effects: Remote sensing can be used to monitor changes to historic properties over time and to assess the potential visual and physical impacts of a project.
- Step 4: Resolve Adverse Effects: Data from remote sensing can inform the development of mitigation measures, such as project redesign to avoid sensitive areas.
Defining the Area of Potential Effects (APE) with Remote Sensing
The APE is the geographic area within which an undertaking may directly or indirectly cause alterations in the character or use of historic properties.[1] Remote sensing helps in defining a more accurate and comprehensive APE by:
- Landscape-Level Analysis: Satellite imagery and LiDAR can be used to analyze broad landscapes to understand the geomorphological and environmental context of potential historic properties.[3]
- Viewshed Analysis: For projects with the potential for visual effects, LiDAR-derived Digital Elevation Models (DEMs) can be used to conduct viewshed analysis to determine the area from which the project will be visible.
- Predictive Modeling: By combining remote sensing data with other geographic information in a GIS, predictive models can be developed to identify areas with a high potential for containing archaeological sites, thus helping to refine the APE.
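The core of a viewshed analysis is a line-of-sight test along a profile of DEM elevations. A minimal sketch, assuming a uniform cell spacing and a simple straight-line sight ray (no earth curvature or refraction correction, which production GIS tools apply):

```python
# Sketch: check whether a target cell is visible from an observer cell by
# walking the DEM elevation profile between them and testing whether any
# intermediate cell rises above the sight line. Elevations are illustrative.
def visible(profile, observer_height=1.7):
    """profile: ground elevations (m) from observer cell to target cell."""
    eye = profile[0] + observer_height
    target = profile[-1]
    n = len(profile) - 1
    for i in range(1, n):
        # Elevation of the sight line at fraction i/n of the way to the target
        sightline = eye + (target - eye) * i / n
        if profile[i] > sightline:
            return False
    return True

print(visible([100, 101, 102, 99]))  # intervening rise blocks the view
print(visible([100, 99, 98, 99]))    # open, descending profile
```

A full viewshed repeats this test from the observer to every cell in the grid, which is what LiDAR-derived DEMs make practical at project scale.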
Identification of Historic Properties
A variety of remote sensing techniques can be employed to identify potential historic properties:
- Aerial and Satellite Imagery: High-resolution satellite imagery can reveal features such as ancient roads, agricultural fields, and even the outlines of buried structures through soil marks and vegetation anomalies.[4] The analysis of multi-temporal imagery can show changes in the landscape that may indicate the presence of historic properties.
- Light Detection and Ranging (LiDAR): LiDAR is particularly effective in forested areas, where it can "see" through the tree canopy to create high-resolution models of the ground surface, revealing subtle earthworks, mounds, and other archaeological features that are not visible in traditional aerial photography.
- Geophysical Surveys: Techniques such as Ground Penetrating Radar (GPR), magnetometry, and electrical resistivity are used to investigate subsurface features without excavation. These methods can detect buried walls, foundations, pits, and other archaeological remains.
Data Presentation and Management
All data collected through remote sensing should be managed within a Geographic Information System (GIS). This allows for the integration of different datasets, spatial analysis, and the creation of maps and models to support the Section 106 review process.
Quantitative Data Summary
The following tables provide a summary of quantitative data for various remote sensing techniques applicable to Section 106 surveys.
Table 1: Comparison of Common Satellite Sensors for Archaeological Prospection
| Satellite/Sensor | Spatial Resolution (Panchromatic) | Spatial Resolution (Multispectral) | Revisit Time | Cost | Key Applications in Section 106 |
|---|---|---|---|---|---|
| Pleiades Neo | 30 cm | 1.2 m | Daily | High | Detailed site identification, feature mapping, monitoring. |
| WorldView-3 | 31 cm | 1.24 m | <1 day | High | High-resolution site discovery, vegetation analysis for crop marks. |
| IKONOS | 82 cm | 3.2 m | 1-3 days | Moderate to High | Regional surveys, identification of larger archaeological features. |
| QuickBird | 61 cm | 2.4 m | 1-3.5 days | Moderate to High | Detailed site mapping and analysis. |
| Sentinel-2 | 10 m (some bands) | 10 m, 20 m, 60 m | 5 days | Free | Large-scale landscape analysis, monitoring environmental changes around sites. |
| Landsat 8/9 | 15 m | 30 m | 16 days | Free | Regional landscape characterization, long-term environmental monitoring. |
| CORONA | ~1.8 m | N/A (B&W film) | Historical | Low (declassified) | Historical landscape analysis, identifying sites disturbed by modern activity. |
| COSMO-SkyMed (SAR) | 1 m (Spotlight mode) | N/A | Variable | Moderate to High | Detection of subsurface features, imaging through cloud cover. |
Table 2: Typical Geophysical Properties of Archaeological Features and Surrounding Soils
| Material/Feature | Magnetic Susceptibility (10⁻⁸ m³/kg) | Electrical Resistivity (ohm-m) | GPR Reflection Potential |
|---|---|---|---|
| Topsoil | 20-100 | 50-200 | Variable |
| Subsoil | 5-30 | 100-500 | Variable |
| Fired Clay (hearth, kiln) | 100-2000 | 100-1000 | Moderate to High |
| Ditch/Pit Fill (organic) | 50-200 | 10-100 | High |
| Stone Foundation/Wall | Low | 500-10,000+ | High |
| Buried Metal Objects | Very High | Very Low (<1) | Very High |
| Compacted Earth (floor) | Slightly higher than surrounding soil | Higher than surrounding soil | Moderate |
Note: These values are approximate and can vary significantly depending on soil type, moisture content, and other environmental factors.
Experimental Protocols
Protocol 1: Archaeological Survey using Airborne LiDAR
1. Objective: To identify and map potential archaeological earthworks and other topographic features within the Area of Potential Effects (APE).
2. Methodology:
- 2.1. Data Acquisition:
  - Specify a high point density for the LiDAR survey (e.g., >8 points per square meter) to ensure adequate resolution for detecting subtle archaeological features.
  - Plan the flight mission for "leaf-off" conditions (late fall to early spring) in deciduous forest areas to maximize ground penetration.
  - Ensure the collection of both first and last return data.
- 2.2. Data Processing:
  - 2.2.1. Point Cloud Classification: Classify the raw LiDAR point cloud data to separate ground points from vegetation and buildings. This is a critical step for creating a "bare-earth" Digital Elevation Model (DEM).
  - 2.2.2. DEM Generation: Create a high-resolution DEM (e.g., 0.5- to 1-meter resolution) from the classified ground points.
- 2.3. Data Visualization and Analysis:
  - Generate various visualizations from the DEM to enhance the visibility of archaeological features. Common techniques include:
    - Hillshade: Simulates the sun's illumination of the terrain from different angles.
    - Slope: Highlights changes in terrain steepness.
    - Local Relief Model (LRM): Removes large-scale topographic trends to emphasize small-scale features.
  - Systematically review the visualizations to identify potential archaeological features such as mounds, earthworks, and old roadbeds.
- 2.4. Ground-Truthing:
  - Conduct field verification of identified anomalies to confirm their archaeological nature.
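The hillshade visualization named in step 2.3 can be sketched directly from the DEM. This is a minimal pure-Python version of the standard hillshade formula (slope and aspect from central finite differences, illumination from a given sun azimuth and altitude); the 5×5 grid and default sun position are illustrative, and production work would use GIS raster tools.

```python
import math

# Sketch: compute hillshade for the interior cells of a small DEM
# (elevations in meters, square cells). Defaults mimic the common
# 315 deg azimuth / 45 deg altitude convention; the grid is illustrative.
def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    az = math.radians(360.0 - azimuth_deg + 90.0)  # compass -> math angle
    zen = math.radians(90.0 - altitude_deg)        # sun zenith angle
    rows, cols = len(dem), len(dem[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            dzdx = (dem[r][c + 1] - dem[r][c - 1]) / (2 * cellsize)
            dzdy = (dem[r + 1][c] - dem[r - 1][c]) / (2 * cellsize)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            shade = (math.cos(zen) * math.cos(slope)
                     + math.sin(zen) * math.sin(slope) * math.cos(az - aspect))
            out[r][c] = max(0.0, 255.0 * shade)
    return out

flat = [[10.0] * 5 for _ in range(5)]
shade = hillshade(flat)
print(shade[2][2])  # flat terrain: uniform illumination everywhere
```

Varying `azimuth_deg` across several runs, as the protocol suggests, helps reveal low earthworks whose shadows only appear under oblique illumination from particular directions.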
Protocol 2: Ground Penetrating Radar (GPR) Survey for Site-Specific Investigation
1. Objective: To detect and map subsurface archaeological features within a specific area of interest identified through other survey methods or historical documentation.
2. Methodology:
- 2.1. Site Preparation:
  - Establish a georeferenced survey grid over the area of interest. The grid size will depend on the expected size and density of features.
  - Clear the survey area of any surface obstructions that may interfere with the GPR antenna.
- 2.2. GPR Data Acquisition:
  - Select an appropriate antenna frequency based on the expected depth and size of the target features. Higher frequencies provide higher resolution but less depth penetration.
  - Collect data in a series of parallel transects across the grid. A close transect interval (e.g., 25-50 cm) is crucial for high-resolution 3D imaging.
  - Maintain a consistent survey speed and antenna contact with the ground.
- 2.3. Data Processing:
  - Apply necessary processing steps to the raw GPR data, which may include:
    - Dewow filtering: to remove low-frequency noise.
    - Gain adjustments: to amplify weaker signals from deeper reflectors.
    - Migration: to correctly position dipping reflectors.
- 2.4. Data Interpretation and Visualization:
  - Analyze the processed GPR profiles to identify hyperbolic reflections and other anomalies that may indicate buried features.
  - Generate "time-slices" or "depth-slices" by combining the data from all transects. These horizontal maps at different depths can reveal the plan view of buried structures.
- 2.5. Reporting:
  - Produce a report with maps showing the location and depth of interpreted archaeological features.
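The dewow filter named in step 2.3 is, in its simplest form, subtraction of a running mean from each trace. A minimal sketch, assuming a window length in samples that real processing software would instead derive from the antenna frequency:

```python
# Sketch: "dewow" a single GPR trace by subtracting a running mean,
# suppressing the slow low-frequency ("wow") component. The window
# length is illustrative.
def dewow(trace, window=5):
    half = window // 2
    out = []
    for i in range(len(trace)):
        lo, hi = max(0, i - half), min(len(trace), i + half + 1)
        out.append(trace[i] - sum(trace[lo:hi]) / (hi - lo))
    return out

# A constant offset (the wow) is removed entirely; only fluctuations remain.
print(dewow([10.0, 10.0, 10.0, 10.0, 10.0]))
```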
Workflow Visualizations
Caption: Integration of Remote Sensing into the ACHP Section 106 Workflow.
Caption: Workflow for Archaeological LiDAR Data Processing and Analysis.
Caption: Protocol for a Ground Penetrating Radar Survey in a Section 106 Context.
References
Integrating Climate Models with AHP Assessments: Application Notes and Protocols
For Researchers and Scientists
These application notes provide a detailed framework and protocol for integrating outputs from climate change models into Analytical Hierarchy Process (AHP) assessments. This methodology facilitates robust, transparent, and multi-criteria decision-making for climate change adaptation and mitigation strategies.
Introduction to AHP in Climate Change Assessments
The Analytical Hierarchy Process (AHP) is a multi-criteria decision-making method that helps structure complex problems and evaluate various alternatives against a set of criteria.[1][2] In the context of climate change, AHP is particularly useful for prioritizing adaptation and mitigation options due to the multifaceted nature of climate impacts, involving ecological, social, and economic factors.[1][2][3] The process involves decomposing a decision problem into a hierarchy, making pairwise comparisons of elements at each level, and synthesizing these judgments to determine overall priorities.
Core Concepts in Integrating Climate Models with AHP
The integration of climate model outputs with AHP assessments provides a scientifically grounded basis for decision-making. Climate models can offer projections on various parameters such as temperature increase, sea-level rise, precipitation patterns, and the frequency of extreme weather events. These projections can be used to inform the criteria and alternatives within the AHP framework.
A key challenge lies in translating the quantitative outputs of climate models into the qualitative and quantitative judgments required for AHP's pairwise comparisons. This involves defining clear indicators and thresholds to assess the impact of climate change on different aspects of a system.
Detailed Protocol for Integration
This protocol outlines a step-by-step methodology for integrating climate change model outputs with AHP assessments.
Step 1: Define the Decision Problem and Goal
The initial step is to clearly articulate the decision problem and the overall goal. For example, the goal could be to "select the most effective climate change adaptation strategy for a coastal community" or to "prioritize research and development investments in climate-resilient agriculture."
Step 2: Structure the Hierarchy
Decompose the decision problem into a hierarchical structure, typically consisting of:
- Goal: The ultimate objective of the decision.
- Criteria: The factors that will be used to evaluate the alternatives. These should be informed by climate model outputs.
- Sub-criteria (optional): More specific factors under each criterion.
- Alternatives: The different options being considered.
Example Hierarchy for Coastal Adaptation Strategy Selection:
- Goal: Select the most effective coastal adaptation strategy.
- Criteria:
  - Environmental Impact (informed by climate models projecting ecosystem changes)
  - Socio-economic Feasibility (informed by climate models projecting impacts on local economies)
  - Effectiveness in Risk Reduction (informed by climate models projecting the severity of climate hazards)
  - Long-term Sustainability (informed by long-range climate model projections)
- Alternatives:
  - Mangrove Restoration
  - Zoning and Building Codes
  - Seawall Construction
  - Coral Reef Protection
  - Relocation Programs
Step 3: Data Collection and Integration of Climate Model Outputs
Gather relevant data, including outputs from climate models. This may involve downscaling global climate models to a local or regional scale to obtain more relevant projections.
Protocol for Integrating Climate Model Data:
- Identify Key Climate Variables: Based on the defined criteria, identify the most relevant climate variables from model outputs (e.g., sea-level rise projections, storm surge frequency, temperature anomalies).
- Define Impact Thresholds: Establish thresholds for these variables to define different levels of impact (e.g., "high," "medium," "low" sea-level rise).
- Develop Scoring Metrics: Create a scoring system to translate the climate model outputs into a format suitable for AHP. For example, a 1-5 scale where 1 represents minimal impact and 5 represents severe impact.
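The threshold-to-score translation can be sketched as a small lookup against ordered breakpoints. The sea-level-rise breakpoints below are illustrative placeholders, not values from any particular climate model:

```python
# Sketch: map a projected climate variable onto the 1-5 impact scale
# using ascending thresholds. Breakpoints are illustrative placeholders.
def impact_score(value, thresholds):
    """thresholds: ascending breakpoints; returns 1 (minimal) to 5 (severe)."""
    score = 1
    for t in thresholds:
        if value >= t:
            score += 1
    return score

SLR_THRESHOLDS = [0.3, 0.5, 0.8, 1.2]  # hypothetical sea-level rise (m) bins
print(impact_score(0.6, SLR_THRESHOLDS))
```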
Step 4: Pairwise Comparisons
This is the core of the AHP methodology. Experts, stakeholders, or researchers conduct pairwise comparisons of the elements at each level of the hierarchy. The comparisons are made using Saaty's 1-9 scale, where 1 indicates equal importance and 9 indicates extreme importance of one element over another.
Experimental Protocol for Pairwise Comparisons:
- Form an Expert Panel: Assemble a group of experts with knowledge in climate science, local environmental conditions, and socio-economics.
- Develop Comparison Questionnaires: Create questionnaires that guide the experts through the pairwise comparisons for each level of the hierarchy.
- Conduct the Comparisons:
  - Criteria Comparison: Compare the relative importance of the criteria with respect to the overall goal.
  - Alternative Comparison: For each criterion, compare the performance of the alternatives.
- Construct Pairwise Comparison Matrices: The results of the comparisons are used to construct matrices for each set of comparisons.
Step 5: Priority Vector Calculation and Consistency Check
For each pairwise comparison matrix, a priority vector (eigenvector) is calculated to determine the relative weights of the elements. The consistency of the judgments is then checked by calculating the Consistency Ratio (CR). A CR of 0.10 or less is generally considered acceptable.
Protocol for Calculation and Consistency Check:
- Normalize the Pairwise Comparison Matrix: Divide each element in the matrix by the sum of its column.
- Calculate the Priority Vector: The priority vector is the average of each row of the normalized matrix.
- Calculate the Principal Eigenvalue (λmax): Multiply the pairwise comparison matrix by the priority vector. Then, divide each element of the resulting vector by the corresponding element of the priority vector. The average of these values is λmax.
- Calculate the Consistency Index (CI): CI = (λmax - n) / (n - 1), where 'n' is the number of elements being compared.
- Calculate the Consistency Ratio (CR): CR = CI / RI, where RI is the Random Index, which is a standard value based on the size of the matrix.
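The calculation protocol above can be sketched in Python using the criteria matrix from Table 1. The RI values are Saaty's standard Random Indices; the approximate column-normalization method used here yields weights close to, though not identical with, a published priority vector, since small differences arise from rounding and method choice.

```python
# Sketch of Step 5: priority vector by column normalization, then the
# lambda_max / CI / CR consistency check. RI values are Saaty's standard
# Random Indices for matrices of size n.
def ahp_priorities(matrix):
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    weights = [sum(normalized[i]) / n for i in range(n)]
    # lambda_max: average of (A*w)_i / w_i
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lambda_max = sum(aw[i] / weights[i] for i in range(n)) / n
    ci = (lambda_max - n) / (n - 1)
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]
    return weights, (ci / ri if ri else 0.0)

criteria = [
    [1,   3,   2,   4],    # Environmental Impact
    [1/3, 1,   1/2, 2],    # Socio-economic Feasibility
    [1/2, 2,   1,   3],    # Effectiveness
    [1/4, 1/2, 1/3, 1],    # Long-term Sustainability
]
weights, cr = ahp_priorities(criteria)
print([round(w, 2) for w in weights], round(cr, 3))
```

The judgments in this matrix are consistent (CR well below the 0.10 threshold), so the derived weights can be carried forward to Step 6.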
Step 6: Synthesize Priorities and Make a Decision
The final step is to aggregate the priorities from all levels of the hierarchy to obtain a final score for each alternative. The alternative with the highest score is considered the most suitable choice based on the established criteria and judgments.
Quantitative Data Presentation
The following tables provide examples of how quantitative data from an AHP assessment for prioritizing climate change mitigation strategies can be presented.
Table 1: Pairwise Comparison Matrix for Criteria
| Criteria | Environmental Impact | Socio-economic Feasibility | Effectiveness | Long-term Sustainability | Priority Vector |
|---|---|---|---|---|---|
| Environmental Impact | 1 | 3 | 2 | 4 | 0.45 |
| Socio-economic Feasibility | 1/3 | 1 | 1/2 | 2 | 0.17 |
| Effectiveness | 1/2 | 2 | 1 | 3 | 0.28 |
| Long-term Sustainability | 1/4 | 1/2 | 1/3 | 1 | 0.10 |
| Consistency Ratio (CR) | | | | | 0.08 |
Table 2: Final Scores and Ranking of Mitigation Strategies
| Mitigation Strategy | Environmental Impact (0.45) | Socio-economic Feasibility (0.17) | Effectiveness (0.28) | Long-term Sustainability (0.10) | Final Score | Rank |
|---|---|---|---|---|---|---|
| Mangrove Restoration | 0.40 | 0.25 | 0.35 | 0.40 | 0.3605 | 1 |
| Zoning & Building Codes | 0.15 | 0.35 | 0.25 | 0.20 | 0.2170 | 3 |
| Seawall Construction | 0.10 | 0.15 | 0.20 | 0.10 | 0.1365 | 4 |
| Coral Reef Protection | 0.30 | 0.10 | 0.15 | 0.25 | 0.2190 | 2 |
| Relocation Programs | 0.05 | 0.15 | 0.05 | 0.05 | 0.0670 | 5 |
Note: The values in Table 2 under each criterion are the priority vectors derived from the pairwise comparisons of the alternatives with respect to that criterion.
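The Step 6 synthesis behind Table 2 can be sketched by recomputing the weighted sums from the printed priority vectors and criterion weights; minor differences from published final scores can arise from rounding upstream.

```python
# Sketch of Step 6: aggregate per-criterion priority vectors (Table 2)
# with the criterion weights (Table 1 header) into final scores.
criterion_weights = [0.45, 0.17, 0.28, 0.10]
alternatives = {
    "Mangrove Restoration":    [0.40, 0.25, 0.35, 0.40],
    "Zoning & Building Codes": [0.15, 0.35, 0.25, 0.20],
    "Seawall Construction":    [0.10, 0.15, 0.20, 0.10],
    "Coral Reef Protection":   [0.30, 0.10, 0.15, 0.25],
    "Relocation Programs":     [0.05, 0.15, 0.05, 0.05],
}

scores = {name: sum(w * s for w, s in zip(criterion_weights, vec))
          for name, vec in alternatives.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
for name in ranking:
    print(f"{name}: {scores[name]:.4f}")
```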
Visualizations
The following diagrams, generated using Graphviz (DOT language), illustrate key workflows and relationships in the integration of climate models with AHP assessments.
Caption: Workflow for integrating climate model outputs into the AHP framework.
Caption: Hierarchical structure of the AHP for climate adaptation decision-making.
Caption: Logical workflow for calculating priorities and checking consistency in AHP.
References
Application Notes and Protocols for Archaeological Data Recovery Under ACHP Guidelines
This document provides detailed application notes and protocols for conducting archaeological data recovery in accordance with the guidelines established by the Advisory Council on Historic Preservation (ACHP). These procedures are primarily situated within the framework of Section 106 of the National Historic Preservation Act (NHPA), which requires federal agencies to consider the effects of their projects on historic properties.[1] Archaeological data recovery is a common mitigation measure used when adverse effects on a significant archaeological site are unavoidable.[1][2][3] It is a carefully planned and executed process to retrieve significant information from a site before it is damaged or destroyed.[1]
Application Note 1: The Section 106 Process and the Role of Data Recovery
Section 106 of the NHPA mandates that federal agencies identify and assess the effects of their undertakings on historic properties. The process involves consultation with State Historic Preservation Officers (SHPOs), Tribal Historic Preservation Officers (THPOs), and other interested parties. Data recovery is not an automatic outcome; it is chosen as a mitigation strategy when a historic property, specifically an archaeological site eligible for the National Register of Historic Places, cannot be preserved in place. The goal of data recovery is to preserve the important information that makes a site significant.
The process leading to a data recovery decision involves several key steps, beginning with the initiation of a project by a federal agency and proceeding through consultation to determine if adverse effects can be avoided, minimized, or mitigated.
Application Note 2: The Data Recovery Plan
Once data recovery is determined to be the appropriate mitigation measure, a comprehensive Data Recovery Plan must be developed. This plan serves as the guiding research design and operational blueprint for all subsequent fieldwork and laboratory analysis. The plan should be developed in consultation with the SHPO/THPO and other consulting parties and is often included as an appendix in a Memorandum of Agreement (MOA). A responsible plan should be grounded in regional, state, or local historic preservation plans and address specific research questions.
Key Components of a Data Recovery Plan:
- Research Design: Clearly defined research questions and objectives tailored to the specific site. This links the data to be recovered with broader anthropological and historical research themes.
- Field Methods: A detailed description of the excavation and documentation methods to be employed. This includes mapping strategies, excavation unit placement, and sampling procedures.
- Laboratory Analysis Plan: A plan outlining the methods for processing, cataloging, and analyzing recovered artifacts and samples.
- Curation: Arrangements for the long-term curation of artifacts, records, and other materials at a suitable repository.
- Reporting and Dissemination: A commitment to produce a comprehensive technical report meeting professional standards, such as the Department of the Interior's Format Standards for Final Reports of Data Recovery Programs. The plan should also include provisions for public outreach and dissemination of results.
- Professional Qualifications: Assurance that the work will be supervised by personnel meeting the Secretary of the Interior's Professional Qualifications Standards.
Protocol 1: Fieldwork Methodologies
Fieldwork is a destructive process; therefore, meticulous documentation is essential to preserve the archaeological context of all recovered data. All field methods must be documented, including journals, forms, sketches, and photographs.
Step 1: Site Preparation and Mapping
- Establish a Site Grid: A precise grid system is established over the excavation area using a total station or similar surveying equipment. This grid is crucial for maintaining horizontal control and accurately mapping the location of all artifacts and features.
- Surface Collection and Documentation: A systematic pedestrian survey and collection of surface artifacts is conducted. The ground surface is photographed and mapped.
- Geophysical Survey (Optional): Non-invasive techniques like ground-penetrating radar or magnetometry may be used to identify subsurface features before excavation.
Step 2: Excavation
- Mechanical Stripping: In areas where the upper soil layers have been disturbed (e.g., by plowing), heavy machinery may be used under the constant supervision of an archaeologist to remove the disturbed topsoil and expose intact archaeological features.
- Hand Excavation:
  - Horizontal Excavation: Involves opening large areas to understand the spatial arrangement of a site at a specific point in time.
  - Vertical Excavation: Focuses on digging deep, often in smaller units, to reveal a chronological sequence of soil layers (stratigraphy).
- Feature Excavation: Archaeological features (e.g., pits, postholes, hearths) are typically excavated in sections (halves or quadrants) to expose and document their vertical profile before the remainder is removed.
Step 3: Data Collection and Recovery
- Screening: All excavated soil is screened through wire mesh (typically ¼-inch or ⅛-inch) to ensure the recovery of small artifacts.
- Sampling: Soil samples are collected from feature fill and stratigraphic layers for specialized analyses like flotation (to recover plant remains) and chemical analysis. Samples for radiocarbon dating (e.g., charcoal) are collected carefully to avoid contamination.
- Documentation: All aspects of the excavation are thoroughly documented through standardized forms, detailed notes, scaled drawings of plan views and profiles, and high-resolution digital photography.
Protocol 2: Laboratory Methodologies
The analysis of artifacts and samples recovered during fieldwork is where much of the detailed interpretation occurs. Laboratory protocols ensure that all materials are properly cleaned, cataloged, analyzed, and prepared for curation.
Step 1: Initial Processing
- Washing and Cleaning: Artifacts are carefully cleaned to remove dirt. The method depends on the material; for example, delicate materials may be dry-brushed, while durable items like stone tools are washed with water.
- Stabilization: Some artifacts may require conservation treatment to prevent deterioration after being removed from the ground.
- Sorting: Artifacts are sorted into basic material categories (e.g., lithic, ceramic, bone, metal).
Step 2: Cataloging and Data Entry
- Numbering: Each artifact is assigned a unique catalog number that links it to its precise provenience (the site, excavation unit, and level where it was found). This number is written directly on the artifact or on an attached tag.
- Database Entry: All information about the artifact (material, type, weight, measurements, provenience) is entered into a database. This database is essential for subsequent quantitative analysis.
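The cataloging database can be sketched with the Python standard library's sqlite3 module. The schema below mirrors the fields of the artifact catalog in Table 1 of the reporting section; the table and column names are illustrative choices, not a prescribed standard.

```python
import sqlite3

# Sketch: an artifact catalog table keyed by catalog number, with
# provenience fields supporting the queries used in quantitative analysis.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE artifacts (
        catalog_no TEXT PRIMARY KEY,
        unit INTEGER, level INTEGER,
        northing REAL, easting REAL, elevation REAL,
        artifact_class TEXT, description TEXT, weight_g REAL
    )
""")
conn.execute(
    "INSERT INTO artifacts VALUES (?,?,?,?,?,?,?,?,?)",
    ("2025.01.01", 1, 3, 100.5, 201.2, 98.7, "Lithic", "Chert flake", 5.2),
)

# Provenience query: everything recovered from Unit 1, Level 3
rows = conn.execute(
    "SELECT catalog_no, artifact_class FROM artifacts WHERE unit=? AND level=?",
    (1, 3),
).fetchall()
print(rows)
```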
Step 3: Specialized Analyses
Based on the research design, specialized analyses are conducted:
- Lithic Analysis: The study of stone tools to understand technology, function, and raw material procurement.
- Ceramic Analysis: The study of pottery to determine vessel form, function, manufacturing techniques, and cultural affiliation.
- Flotation: Processing of soil samples in water to separate light materials (seeds, charcoal) from heavy materials (soil, tiny artifacts). The recovered plant remains are then identified (paleoethnobotanical analysis).
- Faunal Analysis: Identification and analysis of animal bones to reconstruct diet, subsistence strategies, and past environments.
- Chronometric Dating: Submission of appropriate samples (e.g., charcoal, organic residues) for dating via methods like Accelerator Mass Spectrometry (AMS) radiocarbon dating.
Data Presentation and Reporting
Table 1: Example of Artifact Catalog Data
| Catalog No. | Unit | Level | Provenience (N, E, Z) | Artifact Class | Description | Weight (g) |
|---|---|---|---|---|---|---|
| 2025.01.01 | 1 | 3 | 100.5, 201.2, 98.7 | Lithic | Chert Flake | 5.2 |
| 2025.01.02 | 1 | 3 | 100.6, 201.4, 98.6 | Ceramic | Shell-tempered body sherd | 12.8 |
| 2025.01.03 | 2 | 2 | 105.1, 200.9, 99.1 | Faunal | Deer phalanx | 8.1 |
Table 2: Example of Archaeological Feature Summary
| Feature No. | Unit | Type | Dimensions (cm) | Associated Samples | Description |
|---|---|---|---|---|---|
| F-01 | 1 | Pit | 85 x 70 x 45 | Flotation, C14 | Circular pit with dark, organic-rich fill and charcoal flecks. |
| F-02 | 3 | Posthole | 15 x 15 x 25 | None | Small, circular stain with defined edges. |
Table 3: Example of Radiocarbon Dating Results
| Lab No. | Sample No. | Material | Conventional Age (BP) | 2-sigma Calibrated Range (AD) |
|---|---|---|---|---|
| Beta-12345 | 2025.01.FS12 | Wood Charcoal | 1150 ± 30 | 780 - 970 |
| Beta-12346 | 2025.01.FS25 | Carbonized Seed | 1120 ± 30 | 860 - 990 |
Application Note 3: Curation and Knowledge Dissemination
The final steps of the data recovery process involve ensuring the long-term preservation of the archaeological collection and sharing the knowledge gained.
- Curation: All artifacts, field notes, photographs, and other project records must be prepared for permanent curation in an approved facility. This ensures that the collection is available for future research.
- Dissemination: The findings should be made accessible to the public and the professional community. This can include public lectures, websites, publications in peer-reviewed journals, and presentations at scientific conferences. This step fulfills the public trust component of archaeology and ensures the preserved information contributes to our collective understanding of the past.
References
Application of GIS in Mapping and Analyzing Historic Landscapes for the Advisory Council on Historic Preservation (ACHP)
Application Notes and Protocols for Researchers and Scientists
The integration of Geographic Information Systems (GIS) has revolutionized the field of historic preservation, offering powerful tools for mapping, analyzing, and managing historic landscapes. For the Advisory Council on Historic Preservation (ACHP), which oversees the Section 106 review process, GIS provides a critical framework for identifying historic properties, assessing the effects of federal undertakings, and facilitating consultation among stakeholders. These notes provide an overview and detailed protocols for the application of GIS in this context.
Geographic Information Systems are digital tools adept at managing both geographical and non-spatial attribute data within a unified framework, making them particularly effective for handling the spatio-temporal information inherent in historic landscape analysis.[1] By leveraging GIS, researchers can create detailed digital models of past landscapes, track land-use evolution, and conduct spatial analyses that offer unprecedented insight into how those landscapes have changed.[2] This capability is central to the ACHP's mission to promote the preservation, enhancement, and sustainable use of our nation's diverse historic resources.
The ACHP has increasingly recognized the importance of digital and geospatial information in improving the efficiency of the Section 106 review process.[3] Efforts are underway to improve the availability of digital tools and to create a centralized, nationwide map of historic and cultural resources that can inform early planning and siting decisions for the approximately 120,000 federal undertakings reviewed annually.
Core Applications of GIS for the ACHP and Historic Landscape Analysis:
- Historic Landscape Characterization (HLC): HLC is a primary application of GIS in this field. It involves the seamless mapping of the historic character of the landscape, emphasizing that the historic environment is a continuous entity. HLCs are not merely maps but spatial databases that can be integrated into various planning processes.
- Predictive Modeling: GIS can be used to create models that predict the probability of finding archaeological sites based on landscape variables. These models are valuable in the initial stages of development planning and for the protection of cultural heritage.
- Viewshed and Sensory Analysis: GIS tools can analyze the visual impact of proposed projects on historic properties by assessing changes to the integrity of their setting, feeling, and association.
- Change Detection and Monitoring: By overlaying historical maps and aerial imagery with modern data, GIS can quantify changes in land use and the condition of historic resources over time.
- Data Management and Accessibility: GIS provides a centralized platform for storing, managing, and sharing vast amounts of spatial and attribute data related to historic properties, which is a key goal of the ACHP.
Data Presentation
The quantitative outputs of GIS analysis are crucial for informed decision-making in the Section 106 process. The following tables illustrate how such data can be structured for clarity and comparison.
Table 1: Historic Landscape Characterization (HLC) Area Analysis
| HLC Type | 1850 (acres) | 1950 (acres) | 2020 (acres) | Percent Change (1850-2020) |
|---|---|---|---|---|
| Agricultural Fields | 1,200 | 950 | 600 | -50% |
| Forest Cover | 500 | 700 | 850 | +70% |
| Urban/Developed | 50 | 200 | 400 | +700% |
| Water Bodies | 150 | 150 | 150 | 0% |
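The percent-change column in Table 1 is simple arithmetic on the GIS-derived area figures; a minimal Python sketch using the same values:

```python
# Percent change in HLC area between the earliest and latest periods.
# Figures mirror Table 1; in practice they come from GIS polygon areas.
hlc_areas = {
    "Agricultural Fields": (1200, 600),
    "Forest Cover": (500, 850),
    "Urban/Developed": (50, 400),
    "Water Bodies": (150, 150),
}

def percent_change(start: float, end: float) -> float:
    """Signed percent change from the start value to the end value."""
    return (end - start) / start * 100.0

for hlc_type, (a1850, a2020) in hlc_areas.items():
    print(f"{hlc_type}: {percent_change(a1850, a2020):+.0f}%")
```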
Table 2: Predictive Model for Archaeological Site Potential
| Landscape Variable | Weighting Factor | Area Surveyed (acres) | Predicted Sites | Identified Sites | Success Rate |
|---|---|---|---|---|---|
| Proximity to Water | 0.4 | 500 | 12 | 9 | 75% |
| Slope (< 15%) | 0.3 | 500 | 8 | 7 | 87.5% |
| Southern Aspect | 0.2 | 500 | 5 | 3 | 60% |
| Soil Type (Well-drained) | 0.1 | 500 | 3 | 2 | 66.7% |
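The weighting factors in Table 2 imply a standard weighted-overlay score for each map cell. A minimal Python sketch; the variable names and the 0/1 indicator encoding are illustrative assumptions, not part of any ACHP specification:

```python
# Weighted-overlay suitability score for a single grid cell, using the
# weighting factors from Table 2. Each variable is a 0/1 indicator.
WEIGHTS = {
    "proximity_to_water": 0.4,
    "slope_lt_15pct": 0.3,
    "southern_aspect": 0.2,
    "well_drained_soil": 0.1,
}

def site_potential(cell: dict) -> float:
    """Sum of weights for the variables present in the cell (0.0-1.0)."""
    return sum(w for var, w in WEIGHTS.items() if cell.get(var))

# A cell near water, on a gentle south-facing slope, but poorly drained:
cell = {"proximity_to_water": 1, "slope_lt_15pct": 1,
        "southern_aspect": 1, "well_drained_soil": 0}
print(round(site_potential(cell), 2))  # 0.9
```

In a full model this score would be computed for every raster cell and thresholded into high/medium/low site-potential zones.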
Table 3: Section 106 Undertaking Viewshed Analysis
| Historic Property | Total Viewshed (acres) | Viewshed Impacted by Undertaking (acres) | Percentage of Viewshed Impacted | Integrity of Setting (Assessment) |
|---|---|---|---|---|
| Historic Farmstead | 350 | 75 | 21.4% | Adverse Effect |
| Prehistoric Earthwork | 120 | 10 | 8.3% | No Adverse Effect |
| Historic Bridge | 50 | 25 | 50% | Adverse Effect |
Experimental Protocols
Detailed methodologies are essential for ensuring the replicability and validity of GIS-based analyses for historic landscapes.
Protocol 1: Historic Landscape Characterization (HLC) using GIS
Objective: To create a spatial database of historic landscape types to inform preservation planning.
Methodology:
1. Data Acquisition:
   - Collect historical maps (e.g., 19th-century topographic maps, early 20th-century Sanborn maps).
   - Acquire historical aerial photographs and satellite imagery.
   - Gather modern geospatial data, including digital elevation models (DEMs), land use/land cover (LULC) data, and property parcel data.
   - Incorporate archival records and textual descriptions of the landscape.
2. Georeferencing:
   - Scan and georeference all historical maps and aerial imagery to a common modern coordinate system (e.g., UTM or State Plane). This process involves identifying control points visible on both the historical and modern data to spatially align them.
3. Data Digitization and Classification:
   - Digitize key features from the georeferenced historical sources as vector data (points, lines, and polygons).
   - Develop a classification scheme for historic landscape types based on historical land use (e.g., agricultural, industrial, residential, woodland).
   - Create polygons for each distinct landscape type for different time periods.
4. Spatial Analysis and Map Creation:
   - Perform overlay analysis to compare the extent and distribution of landscape types across different time periods.
   - Calculate the area of each landscape type and quantify the changes over time.
   - Generate thematic maps illustrating the historic landscape character for each period and maps showing areas of significant transformation.
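The area calculations in the spatial-analysis step ultimately reduce to polygon geometry on the digitized vector data. A minimal Python sketch of the underlying shoelace-formula computation, with hypothetical coordinates; a production workflow would use the GIS package's own area tools:

```python
# Shoelace formula for the area of a digitized landscape polygon.
# Coordinates are hypothetical, in a projected system (e.g., UTM metres).
def polygon_area(coords):
    """Area of a simple polygon given (x, y) vertices in ring order."""
    n = len(coords)
    s = 0.0
    for i in range(n):
        x1, y1 = coords[i]
        x2, y2 = coords[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# A 100 m x 200 m field polygon:
field_1850 = [(0, 0), (100, 0), (100, 200), (0, 200)]
print(polygon_area(field_1850))  # 20000.0 square metres
SQ_M_PER_ACRE = 4046.8564224
print(polygon_area(field_1850) / SQ_M_PER_ACRE)  # about 4.94 acres
```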
Protocol 2: GIS-Based Viewshed Analysis for Section 106 Review
Objective: To assess the potential visual impact of a proposed federal undertaking on the integrity of a historic property's setting.
Methodology:
1. Data Preparation:
   - Obtain a high-resolution Digital Elevation Model (DEM) or LiDAR data for the project area.
   - Create a GIS layer representing the location and extent of the historic property (the viewpoint).
   - Create a GIS layer representing the three-dimensional form of the proposed undertaking (the target).
2. Viewshed Analysis:
   - Using the viewshed analysis tool of a GIS package, calculate the areas from which the historic property is visible (its viewshed).
   - Similarly, calculate the viewshed of the proposed undertaking.
3. Impact Assessment:
   - Overlay the historic property's viewshed with the location of the proposed undertaking.
   - Determine the extent to which the undertaking will be visible from the historic property and its surrounding landscape.
   - Analyze how the introduction of new visual elements may alter the character of the property's setting, feeling, or association, which are key aspects of its integrity.
4. Reporting:
   - Generate maps illustrating the existing viewshed and the areas of potential visual impact.
   - Provide a quantitative summary of the impacted area.
   - Prepare a written assessment of whether the visual impact constitutes an "adverse effect" under Section 106 regulations.
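The viewshed calculation in the analysis step rests on repeated line-of-sight tests against the DEM. A toy Python sketch of a single 1-D transect test; real viewshed tools sweep full 2-D rasters and account for earth curvature and atmospheric refraction, and the profile values here are invented:

```python
# Minimal line-of-sight test on a DEM transect: a target cell is visible
# from an observer cell if no intermediate terrain rises above the
# straight sight line between them.
def visible(dem, observer, target, obs_height=1.7):
    """1-D line-of-sight between two cells of an elevation profile."""
    o_elev = dem[observer] + obs_height  # eye height above ground
    t_elev = dem[target]
    span = target - observer
    for i in range(observer + 1, target):
        # Elevation of the sight line at cell i (linear interpolation):
        line = o_elev + (t_elev - o_elev) * (i - observer) / span
        if dem[i] > line:
            return False
    return True

profile = [100, 100, 101, 120, 103, 104]  # a ridge at index 3
print(visible(profile, 0, 5))  # ridge blocks the view -> False
print(visible(profile, 0, 2))  # -> True
```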
Mandatory Visualization
The following diagrams illustrate key workflows and logical relationships in the application of GIS for historic landscape analysis in the context of the ACHP's mission.
Caption: Workflow for GIS Application in Historic Landscape Analysis for the ACHP.
Caption: Integration of GIS into the Four-Step Section 106 Review Process.
References
Application Notes and Protocols for Materials Analysis of Historic Structures for Advisory Council on Historic Preservation (ACHP) Review
Introduction
The analysis of historic building materials is a critical component of the Section 106 review process overseen by the Advisory Council on Historic Preservation (ACHP). This process requires federal agencies to consider the effects of their undertakings on historic properties.[1] Materials analysis informs decisions regarding the preservation, rehabilitation, restoration, or reconstruction of historic structures by identifying original materials, understanding their condition, and developing compatible repair strategies.[2] These application notes provide detailed protocols for the analysis of common historic materials, including mortar, paint, wood, and metal, to ensure that undertakings are consistent with the Secretary of the Interior's Standards for the Treatment of Historic Properties.
I. Regulatory Framework: The Section 106 Process
The Section 106 process, mandated by the National Historic Preservation Act of 1966, is a four-step review process:
1. Initiate the Section 106 Process: The federal agency identifies consulting parties, including the State Historic Preservation Officer (SHPO) or Tribal Historic Preservation Officer (THPO).
2. Identify Historic Properties: The agency identifies historic properties within the area of potential effects.
3. Assess Adverse Effects: The agency, in consultation with the SHPO/THPO, assesses whether the undertaking will have an adverse effect on the historic property. An adverse effect occurs when an undertaking may alter the characteristics that qualify the property for inclusion in the National Register of Historic Places.[3]
4. Resolve Adverse Effects: The agency consults to develop and evaluate alternatives that could avoid, minimize, or mitigate the adverse effects.
Materials analysis is a key component of this process, providing the data necessary to make informed decisions at each step.
II. Experimental Protocols
A. Mortar Analysis
The analysis of historic mortar is crucial for determining its composition to ensure that any new mortar used for repointing is physically and visually compatible with the original. The standard test method for hardened masonry mortar is ASTM C1324.[4][5]
Objective: To determine the binder type (lime, cement), aggregate type, and the volumetric proportions of the original mortar.
Protocol: ASTM C1324 - Examination and Analysis of Hardened Masonry Mortar
1. Sampling:
   - Carefully extract intact mortar samples (approximately 10 g) from various locations, ensuring they are representative of the original mortar and not later repairs.
   - Document the sample locations with photographs and drawings.
2. Petrographic Examination (ASTM C856):
   - Prepare a thin section or a polished section of a mortar fragment.
   - Examine the sample using a stereomicroscope and a polarized light microscope.
   - Identify the binder type (e.g., lime, natural cement, Portland cement) based on its optical properties and texture.
   - Characterize the aggregate, noting its mineralogy, size, shape, and gradation.
   - Visually estimate the binder-to-aggregate ratio.
3. Chemical Analysis (Acid Digestion):
   - This procedure is suitable for mortars with acid-insoluble aggregates (e.g., quartz sand).
   - Crush a representative portion of the mortar sample.
   - Dry the crushed sample to a constant weight.
   - Digest a known weight of the dried sample in dilute hydrochloric acid to dissolve the cementitious binder.
   - Filter the solution to separate the insoluble aggregate.
   - Wash the aggregate with deionized water and dry it to a constant weight.
   - The weight of the dried aggregate is subtracted from the initial sample weight to determine the weight of the acid-soluble binder.
   - Further chemical analysis of the acid-soluble portion can determine the calcium and magnesium content to differentiate between calcitic and dolomitic limes.
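The final weighing step in the acid-digestion procedure reduces to a simple mass balance; a minimal Python sketch with hypothetical masses:

```python
# Acid-digestion arithmetic from the mortar protocol: the acid-soluble
# (binder) fraction is the mass lost when the dried sample is digested.
def binder_fraction(sample_dry_g: float, aggregate_dry_g: float) -> float:
    """Mass fraction of acid-soluble binder in a dried mortar sample."""
    if not 0 <= aggregate_dry_g <= sample_dry_g:
        raise ValueError("aggregate mass must be between 0 and sample mass")
    return (sample_dry_g - aggregate_dry_g) / sample_dry_g

# A 10.00 g dried sample leaving 7.50 g of insoluble aggregate:
print(binder_fraction(10.00, 7.50))  # 0.25 -> 25% binder by mass
```

Converting this mass fraction into the volumetric binder-to-aggregate ratio reported in repointing specifications additionally requires the bulk densities of the two components.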
B. Historic Paint Analysis
Architectural paint analysis is used to identify the color and finish history of a structure. This information is vital for restoration projects aiming to replicate a specific historic appearance.
Objective: To determine the sequence of paint layers (chronology), pigment composition, and binder type.
Protocol: Microscopic Paint Analysis
1. Sampling:
   - Using a scalpel and viewing through a stereomicroscope, carefully remove small, full-depth paint samples (approximately 20-25 µm wide) that include the substrate.
   - Select sample locations that are likely to have a complete and well-preserved paint history.
   - Document the sample locations.
2. Cross-Section Preparation:
   - Embed the paint sample in a clear resin block.
   - Grind and polish the embedded sample to reveal a clear cross-section of the paint layers.
3. Polarized Light Microscopy (PLM) of Cross-Sections:
   - Examine the polished cross-section under a polarized light microscope at magnifications from 100x to 1000x.
   - Use reflected visible and ultraviolet (UV) light to view the paint stratigraphy. Certain materials, like natural resins, may fluoresce under UV light, aiding in their identification.
   - Document the number, color, and thickness of each paint layer to establish the chromochronology.
4. Pigment Identification (Dispersed Sample):
   - Take a separate, minute paint particle (approximately 50 x 50 µm).
   - Place the particle on a microscope slide with a mounting medium and crush it with a cover slip to disperse the individual pigment particles.
   - Examine the dispersed pigments using PLM with transmitted light.
   - Identify pigments by comparing their optical properties (color, shape, refractive index, and polarization colors) to known reference standards.
5. Binder Analysis (FTIR Spectroscopy):
   - Attenuated Total Reflectance-Fourier Transform Infrared (ATR-FTIR) spectroscopy can be used to identify the paint binder (e.g., oil, latex, natural resin).
   - A small sample of the paint is placed in direct contact with the ATR crystal.
   - The resulting infrared spectrum is a molecular fingerprint of the material, which can be compared to spectral libraries for identification.
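The layer documentation from the cross-section examination lends itself to a simple structured record; a minimal Python sketch in which the field names and example layers are illustrative, not a prescribed schema:

```python
# A minimal record structure for paint stratigraphy ("chromochronology"),
# mirroring the fields documented during cross-section examination.
from dataclasses import dataclass

@dataclass
class PaintLayer:
    number: int          # counted from the substrate up
    color: str
    thickness_um: float  # measured on the polished cross-section
    fluoresces_uv: bool  # e.g., natural-resin varnish layers

layers = [
    PaintLayer(1, "light gray", 35.0, False),
    PaintLayer(2, "cream", 28.0, False),
    PaintLayer(3, "dark green", 40.0, True),
]
total = sum(layer.thickness_um for layer in layers)
print(f"{len(layers)} layers, total film build {total:.0f} um")
```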
C. Wood Identification
Identifying the wood species used in a historic structure is essential for understanding its construction and for selecting appropriate replacement materials.
Objective: To identify the genus and, if possible, the species of wood used in structural and decorative elements.
Protocol: Macroscopic and Microscopic Wood Identification
1. Macroscopic Analysis:
   - Examine a clean-cut surface of the wood with the naked eye and a hand lens (10-20x magnification).
   - Observe features such as color, grain, texture, and the presence of resin canals.
   - Note whether the wood is ring-porous, semi-ring-porous, or diffuse-porous.
   - These macroscopic features can often narrow down the possible wood species.
2. Microscopic Analysis:
   - For definitive identification, microscopic examination of the wood's cellular structure is necessary.
   - Prepare thin sections (transverse, radial, and tangential) of a small wood sample.
   - Examine the sections under a light microscope.
   - Identify the arrangement and types of cells (vessels, tracheids, rays, parenchyma) and compare these features to wood anatomy databases and reference collections. The International Association of Wood Anatomists (IAWA) provides standardized lists of microscopic features for wood identification.
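The macroscopic step effectively applies a dichotomous key: the porosity class narrows the candidate taxa before microscopic confirmation. A toy Python sketch with a few well-known example genera; the groupings are illustrative only and are no substitute for the IAWA feature lists:

```python
# A toy dichotomous-key lookup: porosity class -> candidate taxa.
# Groupings are common North American examples, not an authoritative key.
POROSITY_KEY = {
    "ring-porous": ["oak (Quercus)", "ash (Fraxinus)", "elm (Ulmus)"],
    "semi-ring-porous": ["black walnut (Juglans nigra)"],
    "diffuse-porous": ["maple (Acer)", "birch (Betula)"],
}

def candidates(porosity: str):
    """Candidate taxa for a macroscopically observed porosity class."""
    return POROSITY_KEY.get(porosity, [])

print(candidates("ring-porous"))
```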
D. Analysis of Historic Metals
Metallographic analysis reveals the microstructure of historic metals, providing insights into their composition, fabrication techniques, and causes of deterioration.
Objective: To characterize the alloy, determine how it was worked (e.g., cast, wrought, annealed), and identify corrosion products.
Protocol: Metallographic Examination
1. Sampling:
   - Carefully remove a small, representative sample from the metal element. The location of the sample is crucial for understanding the object's history.
2. Sample Preparation:
   - Mount the metal sample in a resin block.
   - Grind and polish the surface of the sample to a mirror finish.
   - Etch the polished surface with a suitable chemical etchant to reveal the microstructure.
3. Microscopic Examination:
   - Examine the etched sample using a metallurgical microscope.
   - Observe the size, shape, and arrangement of the grains to determine how the metal was formed and whether it was heat-treated. For example, a worked and annealed structure will show equiaxed grains with annealing twins.
   - Identify different phases and inclusions within the metal.
   - Examine the as-polished sample to observe features like corrosion and voids.
III. Data Presentation
Quantitative data from materials analysis should be summarized in clear, structured tables for easy comparison and interpretation. The following are examples of how to present data for different materials.
Table 1: Mortar Analysis Summary
| Sample Location | Binder Type | Aggregate Composition | Binder:Aggregate Ratio (by volume) | Compressive Strength (if tested) | Notes |
|---|---|---|---|---|---|
| North Elevation, 1st Floor | High-Calcium Lime | Well-graded quartz sand | 1:3 | N/A | Original bedding mortar |
| West Parapet | Portland Cement and Lime | Poorly-graded sand | 1:1:6 | N/A | Later repair mortar |
Table 2: Paint Layer Chronology
| Layer No. (from substrate up) | Color | Finish (e.g., matte, gloss) | Pigments Identified (PLM) | Binder Type (FTIR) |
|---|---|---|---|---|
| 1 | Light Gray | Matte | Lead white, lampblack | Linseed oil |
| 2 | Cream | Eggshell | Titanium dioxide, yellow ochre | Alkyd resin |
| 3 | Dark Green | Gloss | Chrome green, Prussian blue | Oil-based enamel |
IV. Visualization of Experimental Workflow
The following diagram illustrates the logical workflow for the materials analysis of a historic structure for ACHP review.
References
- 1. ACHP Regulations Implementing Section 106 | DSC Workflows (U.S. National Park Service) [nps.gov]
- 2. nps.gov [nps.gov]
- 3. Reaching agreement on appropriate treatment | Advisory Council on Historic Preservation [achp.gov]
- 4. matestlabs.com [matestlabs.com]
- 5. researchgate.net [researchgate.net]
Application Notes & Protocols for Documenting Scientific Findings for Section 106 Compliance
Introduction
Section 106 of the National Historic Preservation Act of 1966 (NHPA) requires federal agencies to consider the effects of their projects on historic properties.[1][2] For researchers, scientists, and drug development professionals, this may become relevant when a project involves ground disturbance, construction, or other activities that could impact archaeological sites or historic buildings where scientific analysis is a component of the mitigation or documentation process. These application notes provide best practices for documenting and presenting scientific findings to ensure compliance with Section 106.
The core of Section 106 is a four-step process: initiating consultation, identifying historic properties, assessing adverse effects, and resolving adverse effects.[2] Scientific data can be crucial in each of these steps, from identifying the composition of archaeological artifacts to determining the environmental impact on a historic structure.
Data Presentation: Quantitative Data Summaries
All quantitative data derived from scientific analyses should be summarized in clearly structured tables. This facilitates comparison and review by regulatory agencies such as the State Historic Preservation Officer (SHPO) or Tribal Historic Preservation Officer (THPO).
Table 1: Material Composition Analysis of Archaeological Artifacts
| Sample ID | Artifact Type | Location (GPS) | Analytical Method | Key Findings (Quantitative) |
|---|---|---|---|---|
| A-001 | Ceramic Shard | 34.0522° N, 118.2437° W | X-Ray Fluorescence (XRF) | Lead (Pb): 15.2 ppm, Copper (Cu): 5.8 ppm |
| A-002 | Metal Fastener | 34.0522° N, 118.2437° W | Inductively Coupled Plasma Mass Spectrometry (ICP-MS) | Iron (Fe): 98.2%, Nickel (Ni): 1.1% |
| B-001 | Soil Sample | 34.0525° N, 118.2440° W | Gas Chromatography-Mass Spectrometry (GC-MS) | Organic Residue Signature: Present |
Table 2: Environmental Impact Assessment on Historic Structure
| Sample Location | Parameter Measured | Analytical Method | Result | Regulatory Threshold |
|---|---|---|---|---|
| North Wall Exterior | Air Quality (SO₂) | Pulsed Fluorescence | 0.08 ppm | 0.075 ppm |
| Foundation Soil | Heavy Metal Contamination (Lead) | Atomic Absorption Spectroscopy | 250 mg/kg | 400 mg/kg |
| Interior Air | Volatile Organic Compounds (VOCs) | Photoionization Detector | 50 ppb | 100 ppb |
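Screening results against regulatory thresholds, as in Table 2, is a mechanical comparison; a minimal Python sketch using the same values:

```python
# Flag measurements that exceed their regulatory thresholds (Table 2).
measurements = [
    ("Air Quality (SO2)", 0.08, 0.075, "ppm"),
    ("Foundation Soil Lead", 250, 400, "mg/kg"),
    ("Interior VOCs", 50, 100, "ppb"),
]

for name, value, threshold, unit in measurements:
    status = "EXCEEDS" if value > threshold else "within limit"
    print(f"{name}: {value} {unit} (threshold {threshold} {unit}) -> {status}")
```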
Experimental Protocols: Detailed Methodologies
Detailed methodologies for all key experiments must be provided to ensure transparency and reproducibility of the scientific findings.
Protocol 1: X-Ray Fluorescence (XRF) Analysis of Ceramic Artifacts
- Objective: To non-destructively determine the elemental composition of ceramic shards to identify potential trade routes and manufacturing techniques.
- Instrumentation: Handheld Bruker TRACER 5i pXRF Analyzer.
- Sample Preparation: Samples were cleaned of loose debris using a soft brush. No chemical cleaning agents were used, to preserve the integrity of the artifact.
- Data Acquisition: Each shard was analyzed at three different points for 60 seconds per point. The instrument was calibrated using a certified standard reference material before and after the analysis of each batch of ten samples.
- Data Analysis: The resulting spectra were processed using the Bruker Artax software. Elemental concentrations were calculated based on the fundamental parameters method.
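The three measurement points per shard specified in the data-acquisition step are typically reduced to a mean concentration with a spread estimate; a minimal Python sketch with hypothetical readings:

```python
# Reduce the three per-shard XRF measurement points to a mean with a
# sample standard deviation. Readings are hypothetical, in ppm.
from statistics import mean, stdev

readings_ppm = {"Pb": [15.0, 15.6, 15.0], "Cu": [5.5, 6.1, 5.8]}

for element, values in readings_ppm.items():
    print(f"{element}: {mean(values):.1f} ± {stdev(values):.1f} ppm")
```

Reporting the spread alongside the mean makes heterogeneous shards (large point-to-point variation) visible to reviewers.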
Protocol 2: Soil Sample Analysis for Environmental Contaminants
- Objective: To assess the presence and concentration of heavy metals in soil samples collected from the Area of Potential Effects (APE) to determine potential environmental impacts on a historic site.[1][3]
- Sample Collection: Soil samples were collected from a depth of 0-15 cm at predetermined grid locations within the APE. Samples were stored in sterile, sealed containers and transported to the laboratory on ice.
- Sample Preparation: Samples were air-dried, sieved to remove large debris, and then subjected to acid digestion using a certified EPA method.
- Instrumentation: Analysis was performed using a PerkinElmer PinAAcle 900T Atomic Absorption Spectrometer.
- Quality Control: A blank, a duplicate, and a standard reference material were analyzed for every 20 samples to ensure data quality and accuracy.
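The quality-control cadence in this protocol (one blank, one duplicate, and one standard reference material per 20 field samples) translates directly into analytical overhead; a minimal Python sketch:

```python
# QC overhead implied by the protocol: 3 QC analyses per batch of 20.
import math

def qc_samples(n_field: int, batch: int = 20, qc_per_batch: int = 3) -> int:
    """Number of QC analyses needed for n_field field samples."""
    return math.ceil(n_field / batch) * qc_per_batch

print(qc_samples(85))  # 5 batches -> 15 QC analyses
```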
Mandatory Visualizations
Diagrams are essential for visualizing complex processes and relationships, making them easier to understand for a diverse audience of stakeholders in the Section 106 process.
Section 106 Compliance Workflow for Scientific Investigations
Caption: Workflow integrating scientific investigation into the Section 106 process.
Signaling Pathway for Environmental Impact on Historic Materials
Caption: Chemical pathway of limestone degradation due to acid rain.
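The degradation pathway the caption refers to is commonly summarized as the sulfation of calcite: atmospheric sulfur dioxide oxidizes and hydrates to sulfuric acid, which converts limestone to gypsum. In reaction form:

```latex
\begin{align*}
  2\,\mathrm{SO_2} + \mathrm{O_2} &\longrightarrow 2\,\mathrm{SO_3} \\
  \mathrm{SO_3} + \mathrm{H_2O} &\longrightarrow \mathrm{H_2SO_4} \\
  \mathrm{CaCO_3} + \mathrm{H_2SO_4} + \mathrm{H_2O} &\longrightarrow \mathrm{CaSO_4 \cdot 2H_2O} + \mathrm{CO_2}
\end{align*}
```

Because gypsum is more soluble and occupies more volume than calcite, this conversion drives surface loss and crust formation on exposed limestone.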
Logical Relationship of Data for National Register Eligibility
Caption: Logical flow from scientific data to a determination of significance.
References
Application Notes and Protocols for Utilizing Digital Twins in Heritage Building Conservation for ACHP Projects
For Researchers, Scientists, and Drug Development Professionals
These application notes provide a comprehensive overview and detailed protocols for the application of digital twin technology in the conservation of heritage buildings, specifically within the framework of Advisory Council on Historic Preservation (ACHP) projects. This document outlines the methodologies for creating and utilizing digital twins, summarizes the quantitative benefits, and proposes a workflow for integrating this technology into the Section 106 review process.
Introduction to Digital Twins in Heritage Building Conservation
A digital twin is a dynamic, virtual representation of a physical heritage building, which is continuously updated with data from its physical counterpart.[1][2] This technology integrates 3D models with various types of data, including historical documentation, material analysis, and real-time monitoring from sensors.[3][4] For heritage conservation, a digital twin serves as a comprehensive information hub that can be used for detailed analysis, simulation of interventions, and long-term preservation planning.[5]
The adoption of digital technologies is becoming more prevalent in historic preservation. Several State Historic Preservation Offices (SHPOs), such as those in Illinois, South Carolina, Ohio, Pennsylvania, and South Dakota, have established guidelines for electronic and digital submissions for project reviews. This move toward digitalization sets a precedent for the use of advanced digital tools like digital twins in federal-level projects overseen by the ACHP.
Key Applications in Heritage Conservation:
- Enhanced Documentation: Creation of highly accurate and detailed 3D models that surpass traditional documentation methods.
- Preventive Conservation: Real-time monitoring of environmental and structural conditions to identify potential risks before they cause damage.
- Informed Decision-Making: Simulation of the potential impacts of conservation interventions, allowing for the selection of the most appropriate and least invasive methods.
- Streamlined Regulatory Review: A digital twin can serve as a comprehensive, data-rich submission for the Section 106 review process, facilitating a more efficient and informed evaluation by the ACHP.
Proposed Workflow for Integrating Digital Twins into the ACHP Section 106 Review
The following diagram illustrates a proposed workflow for utilizing a digital twin to streamline the Section 106 review process. This workflow integrates the creation and application of a digital twin with the key steps of the Section 106 process.
References
- 1. ceur-ws.org [ceur-ws.org]
- 2. emerald.com [emerald.com]
- 3. How Museums Can Preserve Cultural Heritage with Digital Twins Technology [dataart.com]
- 4. Digital Twins and Cultural Heritage Preservation: A Case Study of Best Practices and Reproducibility in Chiesa dei SS Apostoli e Biagio [scirp.org]
- 5. bsmaenterprises.com [bsmaenterprises.com]
Application Notes and Protocols for Assessing Environmental Impacts on Historic Sites in Accordance with the Advisory Council on Historic Preservation (ACHP)
Introduction
These application notes provide a detailed overview of the methodologies established by the Advisory Council on Historic Preservation (ACHP) for assessing the environmental impact of federal undertakings on historic sites. The primary framework for this assessment is the Section 106 review process, mandated by the National Historic Preservation Act (NHPA). This process is designed to ensure that federal agencies consider the effects of their projects on historic properties and consult with relevant stakeholders to avoid, minimize, or mitigate adverse effects.[1][2] While the audience for these notes typically works in scientific research and development, the structured, procedural nature of the Section 106 review is presented here in a format analogous to experimental protocols to facilitate understanding and application in a regulatory context.
The Section 106 process is a critical tool for federal agencies to meet their responsibilities in protecting the nation's historic resources.[2] It comprises four steps: initiating the review, identifying historic properties, assessing potential adverse effects, and resolving those effects through consultation.[3] This framework is also increasingly being used to address the impacts of climate change on historic properties, as outlined in the ACHP's recent policy statements.
Protocol 1: The Section 106 Review Process
This protocol details the four-step methodology for conducting a Section 106 review. Federal agencies are required to follow these steps in sequence to fulfill their obligations under the NHPA.
Objective: To take into account the effects of a federal or federally assisted undertaking on historic properties and to afford the ACHP a reasonable opportunity to comment.
Materials:
- Project plans and specifications for the federal undertaking.
- Access to state and national registers of historic places.
- Communication and documentation tools for consultation with stakeholders.
- Maps and GIS data for the area of potential effects (APE).
Procedure:
Step 1: Initiation of the Section 106 Process
1.1. Determine if the action is a federal "undertaking": An undertaking is a project, activity, or program funded, permitted, licensed, or approved by a federal agency.
1.2. Establish the potential to affect historic properties: If the undertaking is a type of activity that has the potential to cause effects on historic properties, the Section 106 process must be initiated.
1.3. Identify and notify consulting parties: The federal agency must identify and invite appropriate consulting parties to participate. This includes the State Historic Preservation Officer (SHPO) or Tribal Historic Preservation Officer (THPO), Indian tribes, Native Hawaiian organizations, local governments, and the public.
1.4. Plan to involve the public: Develop a plan for public participation that is appropriate to the scale and complexity of the undertaking.
Step 2: Identification of Historic Properties
2.1. Determine the Area of Potential Effects (APE): The APE is the geographic area within which an undertaking may directly or indirectly cause alterations in the character or use of historic properties.
2.2. Gather information on historic properties: The federal agency, in consultation with the SHPO/THPO, must gather information to identify any historic properties within the APE. This includes reviewing existing data and may involve new surveys.
2.3. Evaluate historical significance: Apply the National Register of Historic Places criteria to determine if identified properties are eligible for listing.
2.4. Make a finding of "no historic properties affected" or proceed to Step 3: If no historic properties are present or affected, the agency documents this finding and the process is complete, pending SHPO/THPO concurrence.
Step 3: Assessment of Adverse Effects
3.1. Apply the criteria of adverse effect: In consultation with the SHPO/THPO and other consulting parties, the federal agency applies the criteria of adverse effect to all identified historic properties. An adverse effect occurs when an undertaking may alter the characteristics of a historic property that qualify it for the National Register.
3.2. Make a finding of "no adverse effect" or "adverse effect":
- If there is no adverse effect, the agency proposes this finding to the SHPO/THPO and provides documentation.
- If an adverse effect is found, the agency must notify the ACHP and proceed to Step 4.
Step 4: Resolution of Adverse Effects
4.1. Consultation to resolve adverse effects: The federal agency consults with the SHPO/THPO and other consulting parties to develop measures to avoid, minimize, or mitigate the adverse effects.
4.2. Develop a Memorandum of Agreement (MOA) or Programmatic Agreement (PA): The outcome of the consultation is typically a legally binding agreement that outlines the agreed-upon measures.
4.3. ACHP comment: If consultation fails to result in an agreement, the head of the federal agency must request the comments of the ACHP. These comments are taken into account in the final decision-making process.
Data Presentation: Key Criteria and Roles in the Section 106 Process
The following tables summarize key quantitative and qualitative data points used throughout the Section 106 review.
Table 1: Criteria for Adverse Effect (36 CFR § 800.5(a)(1))
| Criterion | Description | Examples |
|---|---|---|
| Physical Destruction or Damage | The undertaking will physically destroy or damage all or part of the property. | Demolition of a historic building; damage to an archaeological site from construction. |
| Alteration | The undertaking will alter the property in a manner inconsistent with the Secretary of the Interior's Standards for the Treatment of Historic Properties. | Inappropriate additions or renovations; removal of character-defining features. |
| Relocation | The undertaking will cause the relocation of the property from its historic setting. | Moving a historic house to a new location. |
| Change in Use or Setting | The undertaking will change the character of the property's use or physical setting. | Construction of a new highway adjacent to a historic farmstead. |
| Introduction of Intrusive Elements | The undertaking will introduce visual, atmospheric, or audible elements that are out of character with the property. | Erection of a modern cell tower in a historic district; introduction of significant noise or light pollution. |
| Neglect and Deterioration | The undertaking will cause the neglect of a historic property, leading to its deterioration. | Failure to maintain a historic property under federal control. |
| Transfer or Sale | The undertaking involves the transfer, lease, or sale of a historic property out of federal control without adequate preservation restrictions. | Selling a historic federal building to a private entity without protective covenants. |
Table 2: Roles and Responsibilities of Key Participants in the Section 106 Process
| Participant | Key Roles and Responsibilities |
|---|---|
| Federal Agency | Initiates and manages the Section 106 process; makes findings and determinations; consults with other parties; ensures implementation of mitigation measures. |
| State Historic Preservation Officer (SHPO) | Advises and assists federal agencies in carrying out their Section 106 responsibilities; consults on findings and determinations; reflects the interests of the state and its citizens in the preservation of their cultural heritage. |
| Tribal Historic Preservation Officer (THPO) | Assumes the responsibilities of the SHPO on tribal lands; consults on undertakings that may affect historic properties of religious and cultural significance to Indian tribes. |
| Advisory Council on Historic Preservation (ACHP) | Oversees the Section 106 process; issues regulations and guidance; may participate in consultation to resolve adverse effects, particularly in complex or controversial cases. |
| Consulting Parties | Includes local governments, applicants for federal assistance, and other individuals or organizations with a demonstrated interest in the undertaking; have a consultative role in the process. |
| The Public | Provided with opportunities to express their views on resolving adverse effects and to receive information about the undertaking. |
Visualizations: Workflows and Logical Relationships
The following diagrams illustrate the key workflows and logical relationships within the Section 106 process.
Caption: Workflow diagram of the four-step Section 106 review process.
Caption: Logical relationships in the Section 106 consultation process.
Caption: Decision logic for determining an adverse effect on a historic property.
References
- 1. An Introduction to Section 106 | Advisory Council on Historic Preservation [achp.gov]
- 2. Protecting Historic Properties | Advisory Council on Historic Preservation [achp.gov]
- 3. ACHP Regulations Implementing Section 106 | DSC Workflows (U.S. National Park Service) [nps.gov]
A guide to using the ACHP's e106 submission system for research data.
A Guide to Using the e106 Research Data Submission System
Disclaimer: The Advisory Council on Historic Preservation (ACHP) e106 system is dedicated to submissions under Section 106 of the National Historic Preservation Act, concerning federal undertakings and their effects on historic properties.[1][2][3] It is not designed for the submission of scientific or drug development research data.
The following guide is therefore presented as a template for what such a guide might look like for a hypothetical "e106 Research Data Submission System" tailored to researchers, scientists, and drug development professionals.
Application Notes and Protocols for the e106 Research Data Submission System
Audience: Researchers, scientists, and drug development professionals.
1.0 Introduction to the e106 Research Data Submission System
The e106 Research Data Submission System is a centralized platform for the submission of preclinical and clinical research data. Its primary purpose is to streamline the review of research findings, enhance data integrity, and facilitate collaboration among researchers. The system accepts a wide range of data types, including, but not limited to, in vitro assays, in vivo studies, and clinical trial results.
2.0 User Account and Profile Setup
Before submitting data, all users must create a secure account. The registration process requires affiliation with a recognized research institution or pharmaceutical company. Once registered, users must complete their profile with their ORCID iD, institutional details, and relevant research interests.
3.0 Data Submission Workflow
The data submission process follows a structured workflow to ensure accuracy and completeness.
Caption: e106 data submission workflow.
4.0 Quantitative Data Presentation
All quantitative data must be summarized in structured tables. Below are examples for common experimental types.
Table 1: In Vitro IC50 Data for Compound XYZ-123
| Cell Line | Target Protein | IC50 (nM) | Standard Deviation | N (Replicates) |
|---|---|---|---|---|
| MCF-7 | Kinase A | 15.2 | 2.1 | 3 |
| HeLa | Kinase A | 22.8 | 3.5 | 3 |
| A549 | Kinase B | > 10,000 | N/A | 2 |
Table 2: In Vivo Efficacy of Compound XYZ-123 in Xenograft Model
| Treatment Group | Dose (mg/kg) | Tumor Growth Inhibition (%) | p-value | Animal Count |
|---|---|---|---|---|
| Vehicle Control | 0 | 0 | N/A | 10 |
| Compound XYZ-123 | 10 | 45.3 | < 0.05 | 10 |
| Compound XYZ-123 | 30 | 78.1 | < 0.001 | 10 |
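As a minimal sketch, the tumor growth inhibition percentages in Table 2 follow the standard TGI formula; the tumor volume changes below are hypothetical placeholders, not submitted data.

```python
# Sketch of the tumor growth inhibition (TGI) calculation behind efficacy
# tables like Table 2; the volume changes (mm^3) are illustrative values.
def tumor_growth_inhibition(treated_delta, control_delta):
    """TGI (%) = (1 - change in treated tumor volume / change in control) * 100."""
    return (1.0 - treated_delta / control_delta) * 100.0

control_growth = 850.0   # mean tumor volume change, vehicle group (mm^3)
treated_growth = 465.0   # mean tumor volume change, treated group (mm^3)
tgi = tumor_growth_inhibition(treated_growth, control_growth)
print(f"TGI = {tgi:.1f}%")
```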
5.0 Experimental Protocols
Detailed methodologies for all key experiments are required.
5.1 Protocol: Cell Viability Assay (MTT)
1. Cell Seeding: Plate cells in a 96-well plate at a density of 5,000 cells/well and incubate for 24 hours at 37°C and 5% CO2.
2. Compound Treatment: Treat cells with a serial dilution of the test compound for 72 hours.
3. MTT Addition: Add 20 µL of MTT reagent (5 mg/mL in PBS) to each well and incubate for 4 hours.
4. Solubilization: Aspirate the media and add 150 µL of DMSO to each well to dissolve the formazan crystals.
5. Data Acquisition: Measure the absorbance at 570 nm using a microplate reader.
6. Data Analysis: Calculate the IC50 values using a non-linear regression analysis.
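The non-linear regression step can be sketched with a standard four-parameter logistic (Hill) fit; the concentrations and viability values below are hypothetical examples, not data from this protocol.

```python
# Illustrative IC50 fit for a dose-response curve; data are made-up examples.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic model of viability vs. compound concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)      # nM
viability = np.array([0.98, 0.95, 0.80, 0.45, 0.20, 0.08, 0.05])  # fraction of control

# Starting guesses: bottom = 0, top = 1, IC50 near the mid-range dose, Hill = 1
popt, _ = curve_fit(four_pl, conc, viability, p0=[0.0, 1.0, 30.0, 1.0])
bottom, top, ic50, hill = popt
print(f"IC50 = {ic50:.1f} nM, Hill slope = {hill:.2f}")
```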
5.2 Protocol: Western Blot for Target Engagement
1. Protein Extraction: Lyse treated cells with RIPA buffer containing protease and phosphatase inhibitors.
2. Quantification: Determine protein concentration using a BCA assay.
3. Electrophoresis: Separate 20 µg of protein per lane on a 4-12% SDS-PAGE gel.
4. Transfer: Transfer proteins to a PVDF membrane.
5. Blocking and Antibody Incubation: Block the membrane with 5% non-fat milk and incubate with primary antibodies overnight at 4°C, followed by incubation with HRP-conjugated secondary antibodies.
6. Detection: Visualize protein bands using an ECL substrate and an imaging system.
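Detection is usually followed by a densitometry step that normalizes the target band to a loading control; the intensity values below are hypothetical, purely to illustrate the arithmetic.

```python
# Sketch of band-intensity normalization after imaging; numbers are made up.
def normalized_intensity(target, loading_control):
    """Target band intensity relative to the loading-control band in the same lane."""
    return target / loading_control

vehicle = normalized_intensity(18500.0, 21000.0)   # untreated lane
treated = normalized_intensity(6200.0, 20500.0)    # compound-treated lane
pct_remaining = 100.0 * treated / vehicle
print(f"Target signal remaining after treatment: {pct_remaining:.0f}%")
```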
6.0 Signaling Pathway Visualization
Diagrams of relevant signaling pathways are mandatory to provide context for the submitted data.
Caption: Inhibition of the Kinase A signaling pathway by Compound XYZ-123.
References
- 1. Electronic Section 106 Documentation Submittal System (e106) | Advisory Council on Historic Preservation [achp.gov]
- 2. Notice of Availability Electronic Section 106 Documentation Submittal System (e106) | Advisory Council on Historic Preservation [achp.gov]
- 3. Protecting Historic Properties | Advisory Council on Historic Preservation [achp.gov]
Application Notes and Protocols for Innovative Technologies in Heritage Science for ACHP Compliance
Introduction
The preservation of cultural heritage is a critical endeavor, requiring a multidisciplinary approach that balances conservation with the documentation needs of regulatory processes like the Section 106 review overseen by the Advisory Council on Historic Preservation (ACHP).[1][2] The integration of innovative technologies into heritage science provides powerful, non-invasive tools for the precise documentation, analysis, and management of historic properties.[3][4] These technologies generate highly accurate and detailed data that can significantly enhance the quality of submissions for ACHP compliance, facilitating informed decision-making and the development of effective mitigation strategies.[5]
This document provides detailed application notes and protocols for key innovative technologies used in heritage science, including Terrestrial Laser Scanning (TLS), Digital Photogrammetry, Ground-Penetrating Radar (GPR), and integrated data management systems like Historic Building Information Modeling (HBIM) and Geographic Information Systems (GIS).
Logical Framework for Technology Integration in the Section 106 Process
The selection and application of innovative technologies should be strategically integrated into the Section 106 review process. The following diagram illustrates the logical workflow, from initial project planning to the submission of documentation to the ACHP.
Caption: Workflow integrating technology into the Section 106 process.
Terrestrial Laser Scanning (TLS)
Application Note: Terrestrial Laser Scanning (TLS), a form of LiDAR, is a remote sensing technology that rapidly captures dense 3D point cloud data from the surface of structures and landscapes. It is invaluable for creating highly accurate and detailed digital replicas of historic buildings, archaeological sites, and cultural landscapes. For ACHP compliance, TLS provides an exhaustive baseline record of a property's condition, which is crucial for assessing potential adverse effects. The resulting point cloud data can be used to generate traditional 2D documentation, such as plans and elevations, as well as 3D models for analysis and visualization.
Experimental Protocol: TLS for a Historic Building Façade
1. Project Scoping and Planning:
   - Define the Level of Detail (LOD) required for the project objectives.
   - Identify the Area of Potential Effects (APE) and the specific features to be documented.
   - Establish a control network around the structure using a total station or survey-grade GPS for georeferencing. Pre-established control points are crucial for ensuring the accurate registration of multiple scans.
   - Plan scanner locations to ensure complete coverage of the façade, minimizing data shadows and occlusions.
2. Data Acquisition (Fieldwork):
   - Set up the TLS scanner at the first planned location.
   - Place standardized targets within the scan scene, ensuring at least three targets are visible from overlapping scan positions to facilitate registration.
   - Configure scan parameters (resolution, quality) based on the project's LOD requirements. Higher density is needed for complex architectural details.
   - Initiate the scan. Most scanners will also capture high-resolution color imagery to colorize the point cloud.
   - Repeat the process from all planned scanner locations, maintaining significant overlap between scans.
   - Create a field log documenting scan locations, parameters, and any site-specific issues.
3. Data Processing and Registration:
   - Import raw scan data from all positions into registration software (e.g., FARO Scene, Leica Cyclone).
   - Perform an initial registration of the scans using automated or manual target matching.
   - Refine the registration using cloud-to-cloud algorithms to minimize error.
   - Georeference the registered point cloud to the established site control network.
   - Clean the final point cloud to remove noise and unwanted data (e.g., vegetation, passing vehicles).
4. Data Product Generation:
   - From the final, registered point cloud, generate required outputs:
     - 2D Deliverables: Orthorectified elevations, plans, and sections can be extracted for use in CAD software.
     - 3D Deliverables: A complete 3D point cloud model, or a meshed model for visualization and analysis.
     - HBIM Integration: The point cloud serves as the foundational dataset for creating a Historic Building Information Model.
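Registration quality is typically reported as an RMS error over matched targets. As a minimal sketch of that check (coordinates are hypothetical; real projects rely on the registration software's report):

```python
# Compute the RMS residual between target coordinates (metres) measured in
# two registered scans; small RMS indicates a good registration.
import math

targets_scan_a = [(10.000, 5.000, 2.000), (12.500, 5.100, 2.050), (11.000, 7.200, 1.900)]
targets_scan_b = [(10.003, 4.998, 2.001), (12.497, 5.103, 2.048), (11.002, 7.199, 1.903)]

sq_errors = [
    sum((a - b) ** 2 for a, b in zip(pa, pb))
    for pa, pb in zip(targets_scan_a, targets_scan_b)
]
rms = math.sqrt(sum(sq_errors) / len(sq_errors))  # metres
print(f"Registration RMS error: {rms * 1000:.1f} mm")
```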
Workflow Diagram: Terrestrial Laser Scanning
Caption: Workflow for Structure from Motion (SfM) photogrammetry.
Quantitative Data Summary: Photogrammetry
| Data Metric | Description | Typical Values / Format | Relevance to ACHP Compliance |
|---|---|---|---|
| Ground Sample Distance (GSD) | The real-world size of a single pixel in the orthomosaic. | 0.5cm - 5cm | Defines the resolution of the final map; critical for identifying small features. |
| Geometric Accuracy (RMSE) | Root Mean Square Error of the model fit to the ground control points. | 1cm - 10cm | Quantifies the absolute spatial accuracy of the model and derived measurements. |
| Model Resolution | The density of the 3D mesh (number of polygons). | High (millions of polys) | Determines the level of detail in the 3D representation of the feature. |
| Data Format | The file type of the primary outputs. | GeoTIFF (Orthomosaic), .LAS (Point Cloud), .OBJ (Mesh) | Standard formats for integration with GIS, CAD, and 3D viewing software. |
| Processing Report | Software-generated report detailing accuracy metrics. | N/A | Essential metadata for verifying the quality and reliability of the dataset. |
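Ground sample distance can be estimated ahead of a survey from the camera geometry; the sensor and flight parameters below are hypothetical planning values, not requirements.

```python
# Sketch of a GSD planning calculation: pixel footprint on the ground equals
# pixel size times flying height divided by focal length.
def ground_sample_distance(pixel_size_mm, focal_length_mm, height_m):
    """Return GSD in cm/pixel for a nadir-looking camera."""
    return (pixel_size_mm / focal_length_mm) * height_m * 100.0

# e.g. 2.4 micron pixels (0.0024 mm), an 8.8 mm lens, 30 m above the feature
gsd = ground_sample_distance(0.0024, 8.8, 30.0)
print(f"GSD = {gsd:.2f} cm/pixel")
```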
Ground-Penetrating Radar (GPR)
Application Note: Ground-Penetrating Radar (GPR) is a geophysical method that uses radar pulses to image the subsurface. It is a non-invasive and non-destructive technology ideal for identifying and mapping buried archaeological features, unmarked graves, and structural elements within historic buildings without any excavation. In the context of ACHP compliance, GPR surveys are highly effective for site evaluation, helping to define the boundaries of archaeological sites and identify sensitive areas that should be avoided during a project.
Experimental Protocol: GPR Survey for Archaeological Site Assessment
1. Project Scoping and Planning:
   - Define the survey objectives (e.g., locate building foundations, identify burials).
   - Assess site conditions (soil type, moisture, surface clutter) to determine GPR suitability and select the appropriate antenna frequency. Lower frequencies (e.g., 250 MHz) penetrate deeper but have lower resolution; higher frequencies (e.g., 500 MHz) offer higher resolution for shallower targets.
   - Establish a survey grid over the area of interest using tapes, survey flags, or a robotic total station for high precision. Grid line spacing should be close enough to detect the desired targets (e.g., 0.25 m to 0.5 m).
2. Data Acquisition (Fieldwork):
   - Calibrate the GPR unit on-site to determine the velocity of the radar waves through the soil, which is necessary for accurate depth calculations.
   - Collect data by moving the GPR antenna along the pre-established grid lines at a consistent pace.
   - Data can be collected in parallel transects in one direction, or in two perpendicular (bidirectional) directions for more comprehensive coverage.
   - Monitor the data display in real time to ensure good data quality and make adjustments as needed.
3. Data Processing and Analysis:
   - Import the raw GPR data into specialized software (e.g., GSSI RADAN, MALA Vision).
   - Apply processing filters to enhance the data, including:
     - Dewow: Removes low-frequency noise.
     - Gain: Amplifies the signal with depth to compensate for signal attenuation.
     - Migration: Focuses reflections to their true subsurface position, clarifying the shape of anomalies.
   - Analyze the processed 2D radargrams (profiles) to identify hyperbolic reflections and other anomalies indicative of subsurface features.
4. Data Product Generation and Interpretation:
   - Time Slices / Depth Slices: If a grid was collected, the 2D profiles are compiled into a 3D volume. This volume can be "sliced" horizontally at different depths, creating plan-view maps that reveal the spatial layout of anomalies.
   - Interpretation Map: Create a map in GIS or CAD that shows the location and extent of interpreted subsurface features based on the GPR data.
   - Final Report: A report detailing the survey methodology, processing steps, and a full interpretation of the results, including the depth and nature of identified features.
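The depth calculation behind the on-site velocity calibration is simple: a reflection's depth is half its two-way travel time multiplied by the radar velocity in the soil. A minimal sketch, using an assumed velocity of 0.1 m/ns (a typical value for dry sand):

```python
# Convert a GPR two-way travel time to an estimated target depth.
# The velocity is an assumed example value; real surveys calibrate it on-site.
def gpr_depth_m(two_way_time_ns, velocity_m_per_ns=0.1):
    """Depth (m) = velocity * two-way travel time / 2."""
    return velocity_m_per_ns * two_way_time_ns / 2.0

depth = gpr_depth_m(24.0)  # a reflection arriving at 24 ns
print(f"Estimated target depth: {depth:.2f} m")
```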
Workflow Diagram: Ground-Penetrating Radar
Caption: Integration of HBIM and GIS for heritage management and compliance.
References
- 1. Section 106 Digital Library | Advisory Council on Historic Preservation [achp.gov]
- 2. Guidance on Agreement Documents | Advisory Council on Historic Preservation [achp.gov]
- 3. igiprodst.blob.core.windows.net [igiprodst.blob.core.windows.net]
- 4. Technical Article - Digital Technologies in Heritage Conservation | BUILD UP [build-up.ec.europa.eu]
- 5. achp.gov [achp.gov]
Troubleshooting & Optimization
Navigating Cultural Heritage Compliance: A Guide to the ACHP's "Reasonable and Good Faith Effort" Standard
Technical Support Center
For researchers, scientists, and drug development professionals, expanding facilities or conducting field research often involves navigating federal regulations designed to protect historic and cultural resources. A key component of this is the Section 106 process of the National Historic Preservation Act, which requires federal agencies to consider the effects of their projects on historic properties. Central to this process is the "reasonable and good faith effort" standard set by the Advisory Council on Historic Preservation (ACHP). This guide provides troubleshooting advice and frequently asked questions to help you navigate this standard effectively.
Frequently Asked Questions (FAQs)
Q1: What is the "reasonable and good faith effort" standard in the context of our research or development project?
A1: The "reasonable and good faith effort" is a standard required under Section 106 of the National Historic Preservation Act to identify historic properties that might be affected by a project.[1][2][3][4] It is not an exhaustive search but rather a process of due diligence.[3] This effort involves a series of steps to gather information about the potential presence of significant historical or cultural sites within your project's area of potential effects (APE). The federal agency funding, licensing, or permitting your project is ultimately responsible for this standard, but as an applicant, your cooperation and understanding are crucial.
Q2: Our project is on a tight timeline. When should we start considering the "reasonable and good faith effort"?
A2: Early initiation is critical. The Section 106 process should begin at the outset of project planning to run concurrently with other review processes like the National Environmental Policy Act (NEPA). Addressing these requirements early helps avoid delays and potential conflicts later in your project timeline. Thinking of Section 106 as an afterthought can significantly hamper project progression.
Q3: What specific activities are involved in making a "reasonable and good faith effort"?
A3: The effort can include a variety of activities, such as background research, consultation with relevant parties, oral history interviews, sample field investigations, and archaeological surveys. The specific combination of activities will depend on the nature and scale of your project.
Q4: We are a private entity. Does this standard still apply to our work?
A4: If your project requires a federal permit, license, or funding, then Section 106 and the "reasonable and good faith effort" standard apply. The federal agency providing the permit or funding is responsible for compliance, but they will often rely on you as the applicant to provide necessary information and support the process.
Q5: What is the "Area of Potential Effects" (APE) and how do we define it for our project?
A5: The APE is the geographic area or areas within which your project may directly or indirectly cause alterations in the character or use of historic properties. It's important to consider all potential effects, including physical damage, as well as visual, audible, or atmospheric changes. The APE must be defined in consultation with the State Historic Preservation Officer (SHPO) or Tribal Historic Preservation Officer (THPO).
Troubleshooting Guide
Issue: We've been told we need to consult with Native American tribes, but we don't know where to start.
Solution:
1. Understand the Requirement: Federal agencies have a government-to-government consultation responsibility with federally recognized tribes. This is a crucial and sensitive part of the process.
2. Early Outreach: The responsible federal agency should initiate contact with tribes that may attach religious and cultural significance to historic properties in the project area. As an applicant, you can support the agency in this effort.
3. Be Respectful and Patient: Tribal consultation should be conducted in a manner that is respectful of tribal sovereignty. It is important to provide a reasonable opportunity for tribes to respond, as they may have their own internal review processes.
4. Seek Guidance: The ACHP and the relevant federal agency can provide guidance on identifying appropriate tribal contacts and best practices for consultation.
Issue: The SHPO/THPO has requested more information, and we're concerned about project delays.
Solution:
1. Maintain Open Communication: Engage in a dialogue with the SHPO/THPO to understand their specific concerns and the information they require.
2. Provide Thorough Documentation: Ensure that your submissions are complete and well-organized. This includes clear maps of the APE, detailed project descriptions, and the results of your identification efforts.
3. Leverage Professional Expertise: If you haven't already, consider engaging cultural resource management professionals who are experienced in the Section 106 process. They can help ensure your efforts meet professional standards.
4. Negotiate Scope: The "reasonable and good faith effort" does not require the identification of every single historic property. If you believe the requests are excessive, discuss the scope of the identification efforts with the federal agency and the SHPO/THPO, referencing the factors that guide the level of effort.
Data Presentation
Table 1: Factors for Determining a "Reasonable and Good Faith Effort"
| Factor | Description |
|---|---|
| Past Planning, Research, and Studies | Review of existing archaeological and historical data for the project area. |
| Magnitude and Nature of the Undertaking | The scale and type of project (e.g., a large new facility vs. a small-scale environmental study). |
| Degree of Federal Involvement | The level of federal funding, permitting, or licensing required for the project. |
| Nature and Extent of Potential Effects | The potential for the project to cause direct or indirect harm to historic properties. |
| Likely Nature and Location of Historic Properties | The probability of finding historic properties based on historical and environmental context. |
Experimental Protocols
Protocol: Phase I Archaeological Survey
A Phase I Archaeological Survey is a common method for identifying historic properties within a project's APE.
Objective: To determine the presence or absence of archaeological sites in the APE.
Methodology:
1. Background Research:
   - Conduct a thorough review of existing records at the SHPO/THPO office and other relevant repositories to identify known archaeological sites, historic properties, and previous surveys in and near the APE.
   - Examine historic maps, aerial photographs, and geological surveys to understand the land use history and potential for buried cultural deposits.
2. Field Survey:
   - Systematic Pedestrian Survey: Team members walk in transects (straight lines) across the APE, spaced at regular intervals (e.g., 15-30 meters apart), to visually inspect the ground surface for artifacts or other signs of past human activity.
   - Subsurface Testing: In areas with low ground surface visibility (e.g., dense vegetation, pavement), systematic shovel test pits (STPs) are excavated. STPs are typically 30-50 centimeters in diameter and are excavated to sterile subsoil. All excavated soil is screened through 1/4-inch hardware cloth to recover artifacts.
   - Documentation: The location of all identified artifacts and potential archaeological features is recorded using GPS. Detailed notes are taken on soil stratigraphy, ground conditions, and any observed cultural materials.
3. Analysis and Reporting:
   - Artifacts are cleaned, cataloged, and analyzed to determine their age and cultural affiliation.
   - A comprehensive report is prepared that includes the survey methodology, a description of the findings, maps showing the survey area and any identified sites, and recommendations for further investigation (e.g., a Phase II survey to evaluate the significance of a site) if necessary.
Mandatory Visualizations
Section 106 Consultation Process Workflow
Components of a "Reasonable and Good Faith Effort"
References
- 1. transportation.ohio.gov [transportation.ohio.gov]
- 2. achp.gov [achp.gov]
- 3. Determining which archaeological sites are significant: Identification | Advisory Council on Historic Preservation [achp.gov]
- 4. Section 106 | Economic Development & Finance Authority [opportunityiowa.gov]
Technical Support Center: Archaeological Surveys for ACHP Section 106 Compliance
This technical support center provides troubleshooting guides and frequently asked questions (FAQs) to assist researchers, scientists, and drug development professionals in navigating common issues encountered during archaeological surveys conducted under Section 106 of the National Historic Preservation Act (NHPA) for the Advisory Council on Historic Preservation (ACHP).
Frequently Asked Questions (FAQs)
Q1: What is the purpose of a Phase I archaeological survey under Section 106?
A Phase I archaeological survey is the initial step in the Section 106 process, designed to identify and document any archaeological resources within a project's Area of Potential Effects (APE).[1][2][3][4] Its primary goals are to determine the presence or absence of archaeological sites, define their horizontal and vertical boundaries, and gather enough information to make a preliminary assessment of their potential eligibility for the National Register of Historic Places (NRHP).[4] This phase is crucial for informing project planning and avoiding or minimizing impacts on significant cultural resources.
Q2: When is a Section 106 review and an archaeological survey required?
A Section 106 review is required for any project with a "federal nexus," meaning it is funded, permitted, or licensed by a federal agency. If the federal agency determines that the project is an "undertaking" that has the potential to cause effects on historic properties, then a Section 106 review is initiated. An archaeological survey is typically required as part of this review if the undertaking involves ground-disturbing activities.
Q3: What should I do if I encounter unexpected archaeological remains during my project?
If unanticipated archaeological deposits or human remains are discovered during construction, all work in the immediate vicinity must stop. You should immediately notify the lead federal agency, the State Historic Preservation Officer (SHPO), and/or the Tribal Historic Preservation Officer (THPO). The agency will then consult with these parties to determine the appropriate course of action, which may include further archaeological investigation.
Q4: How do I determine the Area of Potential Effects (APE) for my project?
The APE is the geographic area or areas within which an undertaking may directly or indirectly cause alterations in the character or use of historic properties. The federal agency, in consultation with the SHPO/THPO, is responsible for defining the APE. It should be drawn to encompass all potential direct and indirect effects of the project.
Q5: What is the difference between a "site" and an "isolated find"?
An archaeological "site" is a location where there is evidence of past human activity, which can range from a small cluster of artifacts to a large settlement. An "isolated find" is a single artifact discovered in a context where no other evidence of human activity is present. While isolated finds are typically not considered eligible for the NRHP, they should still be documented as they can contribute to the understanding of past land use.
Troubleshooting Common Issues in Archaeological Surveys
This section provides guidance on how to address common problems that may arise during the planning and execution of an archaeological survey.
| Issue | Potential Cause(s) | Recommended Solution(s) |
|---|---|---|
| Inconclusive or Ambiguous Survey Results | Dense vegetation obscuring the ground surface; deeply buried cultural deposits; inappropriate survey methodology for the landscape. | Conduct supplemental shovel testing in areas with low surface visibility. Consider the use of geophysical survey methods (e.g., GPR, magnetometry) to detect subsurface features. Consult with the SHPO/THPO to determine if alternative survey strategies, such as mechanical trenching, are appropriate for identifying deeply buried sites. |
| Discrepancies with Existing Site Records | Inaccurate original site location data; site disturbance or destruction since the last recording; changes in land use or ground cover. | Attempt to relocate the site using GPS coordinates and detailed site descriptions from the original site form. Conduct a systematic survey of the area to identify any remaining evidence of the site. Document any changes to the site's condition and update the site form accordingly. |
| Access to Survey Area is Denied or Restricted | Private landowner refusal; hazardous environmental conditions; presence of sensitive resources. | For private land, the federal agency should make a good faith effort to negotiate access; if access is denied, this should be documented. For hazardous conditions, assess the risk and determine if the survey can be safely conducted with appropriate personal protective equipment (PPE) or if alternative methods are necessary. If access is restricted due to sensitive resources, consult with the relevant agencies or tribes to develop a plan that respects these resources while still meeting the goals of the survey. |
| Equipment Malfunction in the Field | Battery failure; software crashes; damage to the instrument. | Carry spare batteries and charging equipment. Ensure all software is up to date before going into the field, and have a backup data recording method (e.g., field notebooks). Regularly inspect and maintain all survey equipment; for critical equipment, consider having a backup unit available. |
| Inclement Weather | Heavy rain, snow, or extreme temperatures affecting survey conditions and safety. | Monitor weather forecasts and plan fieldwork accordingly. Have a contingency plan for days when fieldwork is not possible. Ensure all team members have appropriate clothing and gear for the expected weather conditions. |
Experimental Protocols
Pedestrian Survey Methodology
A pedestrian survey involves systematically walking over the project area in transects to identify surface artifacts and features.
1. Establish Transects: Lay out parallel transects across the survey area. The spacing between transects should be determined based on ground surface visibility and the expected size of archaeological sites, but is typically no more than 15 meters (50 feet) apart.
2. Systematic Walkover: Team members walk along the transects at a slow, steady pace, carefully examining the ground surface for any signs of past human activity.
3. Flagging and Documentation: When a potential artifact or feature is identified, its location is marked with a pin flag. The location is then recorded using a GPS unit, and the artifact or feature is described in field notes and photographed.
4. Artifact Collection (if applicable): If the project's research design calls for the collection of surface artifacts, each artifact is bagged and labeled with its provenience information.
Shovel Test Pit (STP) Methodology
Shovel test pits are a standard method for identifying subsurface archaeological deposits in areas with low ground surface visibility.
- Grid Layout: Establish a grid system across the survey area. STPs are typically excavated at regular intervals along this grid, often every 30 meters (approximately 100 feet).
- Excavation: Each STP should be a standard size, typically 30-50 cm in diameter, and excavated to a depth sufficient to penetrate the topsoil and enter the sterile subsoil, or to a predetermined depth based on local soil conditions.
- Screening: All excavated soil is screened through 1/4-inch hardware mesh to systematically recover artifacts.
- Documentation: For each STP, the soil stratigraphy is described in field notes, including soil color (using a Munsell soil color chart), texture, and the presence of any cultural material within each layer. The location of each STP is recorded with a GPS unit.
- Backfilling: After documentation, all STPs are backfilled with the excavated soil.
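The grid layout in step 1 amounts to generating coordinates at a regular interval. A minimal sketch (function name and rectangular-block assumption are ours, not from a survey standard):

```python
def stp_grid(width_m, height_m, interval_m=30):
    """Coordinates of shovel test pits on a regular grid.

    interval_m defaults to the 30 m (~100 ft) spacing described above.
    Assumes a simple rectangular survey block with its origin at (0, 0).
    """
    xs = range(0, int(width_m) + 1, interval_m)
    ys = range(0, int(height_m) + 1, interval_m)
    return [(x, y) for x in xs for y in ys]

# A 60 m x 30 m block at 30 m intervals gives a 3 x 2 grid of STPs:
print(len(stp_grid(60, 30)))  # 6
```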
Geophysical Survey Methodology
Geophysical surveys use non-invasive techniques to detect and map subsurface archaeological features. Common methods include Ground-Penetrating Radar (GPR), magnetometry, and electrical resistivity.
- Grid Establishment: A detailed and accurate grid must be established over the survey area to ensure precise spatial control of the data.
- Data Collection: The chosen geophysical instrument is systematically moved across the grid, collecting data at regular intervals.
- Data Processing: The raw data is downloaded and processed using specialized software to remove noise and enhance anomalies that may represent archaeological features.
- Interpretation and Mapping: The processed data is used to create maps that show the location and extent of subsurface anomalies. These maps are then interpreted by a specialist to identify potential archaeological features.
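One common cleaning operation in the processing step is a zero-mean-traverse (destriping) filter, which removes the constant offsets between instrument passes that appear as stripes in plotted magnetometer or resistivity data. This is a minimal sketch, not tied to any particular instrument's software; the function name and data layout are assumptions.

```python
def destripe(grid):
    """Zero-mean-traverse filter: subtract each traverse's mean reading.

    grid is a list of traverses, each a list of readings. Removing the
    per-traverse mean suppresses stripe noise between survey passes while
    preserving the within-traverse anomaly shape.
    """
    out = []
    for traverse in grid:
        mean = sum(traverse) / len(traverse)
        out.append([reading - mean for reading in traverse])
    return out

# Two traverses with different baseline offsets but similar anomaly shapes:
print(destripe([[101.0, 102.0, 103.0], [110.0, 120.0, 130.0]]))
# [[-1.0, 0.0, 1.0], [-10.0, 0.0, 10.0]]
```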
Visualizations
Caption: Workflow of the Section 106 review process.
Caption: Decision-making workflow for troubleshooting common field survey issues.
References
Technical Support Center: Navigating Conflicting Scientific Data in Regulatory Submissions
Welcome to the Technical Support Center, a resource for researchers, scientists, and drug development professionals. This center provides troubleshooting guides and frequently asked questions (FAQs) to help you address conflicting scientific data in your regulatory submissions. Our goal is to provide clear, actionable guidance to ensure your submissions are robust, transparent, and readily interpretable by regulatory agencies.
Frequently Asked Questions (FAQs)
Q1: What are the most common reasons for receiving questions from regulatory agencies about conflicting data?
A1: Regulatory agencies often inquire about conflicting data to ensure the robustness and reliability of your scientific evidence. Common triggers for questions include:
- Inconsistent results across studies: Discrepancies between non-clinical and clinical findings, or between different non-clinical studies, can raise red flags.[1][2]
- Lack of a clear scientific narrative: A submission that presents data without a cohesive story explaining how different pieces of evidence fit together can lead to confusion.[1][3]
- Inadequate explanation of discrepancies: Failing to proactively address and provide a scientifically sound rationale for conflicting results is a major pitfall.
- Poorly designed studies: Flaws in experimental design can lead to unreliable or contradictory data.
Q2: How should I proactively address conflicting data in my submission?
A2: Transparency and a proactive approach are key. Instead of hoping reviewers won't notice discrepancies, it is best to address them head-on.
- Acknowledge and Explain: Clearly identify the conflicting data points and provide a scientifically sound explanation for the observed differences. This could involve differences in experimental models, assay sensitivity, or patient populations.
- Provide a Cohesive Narrative: Weave a compelling scientific story that contextualizes all your data, including the conflicting findings. This narrative should guide the reviewer through your interpretation of the evidence.
- Utilize Summary Tables and Visualizations: Present data in a clear and organized manner, using tables to summarize key findings across studies, highlight consistencies, and explain discrepancies. Visualizations can also help to clarify complex relationships.
- Conduct Additional Studies (if necessary): In some cases, you may need to conduct further experiments to resolve the conflict and strengthen your submission.
Q3: What is the best way to present conflicting data in a summary document?
A3: The goal is to present the information in a way that is easy for reviewers to understand and digest.
- Use a Tiered Approach: Present key findings and a summary of the conflicting data in the main body of the document.
- Provide Detailed Data in Appendices: Include more granular data and detailed analyses in appendices for reviewers who want to dig deeper.
- Cross-Reference Liberally: Use cross-referencing to guide reviewers to the relevant sections of your submission for more detailed information.
- Summarize in Tables: A well-structured table can be an effective way to compare and contrast results from different studies.
Troubleshooting Guides
Issue: Discrepant Results Between In Vitro and In Vivo Studies
Scenario: Your in vitro studies show that your drug candidate is a potent inhibitor of a key signaling pathway, but your in vivo animal studies show a much weaker effect.
Troubleshooting Workflow:
Caption: Workflow for addressing conflicting in vitro and in vivo data.
Detailed Steps:
- Review Experimental Protocols: Carefully compare the protocols for your in vitro and in vivo experiments. Pay close attention to differences in drug concentration, exposure time, and the biological matrix used.
- Assess DMPK Data: Analyze the drug's absorption, distribution, metabolism, and excretion (ADME) profile. Poor bioavailability or rapid metabolism could explain the reduced efficacy in vivo.
- Evaluate Target Engagement: Use techniques like Western blotting, immunohistochemistry, or PET imaging to confirm that the drug is reaching its target in the in vivo model and engaging with it at a sufficient level.
- Consider Off-Target Effects: The in vivo environment is more complex than an in vitro system. Investigate whether the drug has any off-target effects that could be counteracting its intended mechanism of action.
- Refine Models: It may be necessary to refine your in vivo model to better recapitulate the conditions of the in vitro assay, or vice versa.
- Conduct Mechanistic Studies: Design and execute additional experiments to specifically investigate the reasons for the discrepancy.
- Formulate a Rationale: Based on your findings, develop a clear and scientifically sound explanation for the conflicting results.
- Transparent Presentation: In your submission, present all the data, including the conflicting results, and clearly articulate your rationale for the observed differences.
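The DMPK check in step 2 often starts with simple first-order kinetics. A minimal sketch (the helper name and the numbers are illustrative, not from any specific study) estimating an elimination half-life from two plasma concentrations:

```python
import math

def elimination_half_life(c1, c2, t1, t2):
    """Half-life under first-order elimination from two plasma samples.

    k = ln(c1/c2) / (t2 - t1);  t_half = ln(2) / k
    A short half-life relative to dosing interval is one plausible
    explanation for weaker-than-expected in vivo activity.
    """
    k = math.log(c1 / c2) / (t2 - t1)
    return math.log(2) / k

# Concentration falling from 10 to 2.5 ng/mL over 4 h implies a 2 h half-life:
print(elimination_half_life(10.0, 2.5, 0.0, 4.0))  # 2.0
```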
Issue: Inconsistent Safety Signals Across Different Animal Models
Scenario: A toxicology study in one rodent species shows a specific adverse effect, but this effect is not observed in a non-rodent species.
Data Presentation:
| Species | Dose Group (mg/kg) | Key Finding | Incidence |
| Rat | Low | No adverse effects | 0/10 |
| Rat | Mid | Mild liver enzyme elevation | 4/10 |
| Rat | High | Moderate liver enzyme elevation, histopathological changes | 9/10 |
| Dog | Low | No adverse effects | 0/5 |
| Dog | Mid | No adverse effects | 0/5 |
| Dog | High | No adverse effects | 0/5 |
Troubleshooting and Resolution Pathway:
Caption: Pathway for resolving inconsistent safety signals between species.
Experimental Protocols:
- Species-Specific Metabolism Study:
  - Objective: To compare the metabolic profile of the drug candidate in rat and dog liver microsomes.
  - Methodology:
    - Incubate the drug candidate with liver microsomes from both species in the presence of NADPH.
    - Analyze the reaction mixture at various time points using LC-MS/MS to identify and quantify metabolites.
    - Compare the metabolic pathways and the rate of metabolite formation between the two species.
- Protein Binding Assay:
  - Objective: To determine the extent of plasma protein binding in rat and dog plasma.
  - Methodology:
    - Use rapid equilibrium dialysis to incubate the drug candidate with plasma from both species.
    - Measure the concentration of the free and bound drug in each compartment using a validated analytical method.
    - Calculate the percentage of protein binding for each species.
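The binding calculation in the final step of the protein binding assay is simple arithmetic on the measured concentrations. A minimal sketch with illustrative numbers (the function name is ours):

```python
def percent_bound(total_ng_ml, free_ng_ml):
    """Percent plasma protein binding from an equilibrium dialysis experiment.

    bound drug = total - free; the fraction unbound (fu) is free / total.
    Species differences in this value can contribute to divergent safety
    signals, since only free drug is pharmacologically active.
    """
    return 100.0 * (total_ng_ml - free_ng_ml) / total_ng_ml

# 10 ng/mL total with 0.5 ng/mL free in the buffer compartment -> 95% bound:
print(percent_bound(10.0, 0.5))  # 95.0
```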
By following these troubleshooting guides and adopting a proactive and transparent approach, you can effectively address conflicting scientific data in your regulatory submissions, ultimately increasing the likelihood of a successful outcome.
References
Technical Support Center: Integrating Traditional and Scientific Knowledge for the Advisory Council on Historic Preservation (ACHP)
This technical support center provides troubleshooting guides and frequently asked questions (FAQs) for researchers, scientists, and drug development professionals. Our goal is to facilitate the respectful and effective integration of traditional and scientific knowledge in your work, particularly in alignment with the principles of the ACHP.
Frequently Asked Questions (FAQs)
Q1: What is the primary challenge when integrating Traditional Knowledge (TK) with scientific research?
A1: A primary challenge is the potential for misunderstanding and misapplication of Traditional Knowledge when it is removed from its original cultural, social, and ecological context.[1] TK is often holistic, contrasting with the more compartmentalized approach of Western science.[2] Therefore, it should be understood as a comprehensive system of knowledge, not merely as raw data to be validated by scientific methods.[3]
Q2: How can our research team respectfully engage with Indigenous communities and knowledge holders?
A2: Respectful engagement begins with recognizing Indigenous peoples as experts and rightful stewards of their own knowledge.[4] Key steps include:
- Building Trust: Establish long-term relationships based on mutual respect and open communication.[5]
- Ethical Guidelines: Develop and adhere to clear ethical guidelines, ensuring free, prior, and informed consent for all research activities.
- Collaborative Governance: Include traditional knowledge holders in the decision-making processes from the initial design of the study.
- Reciprocity: Ensure that the research provides tangible benefits to the community, rather than being purely extractive.
Q3: What are the best practices for documenting Traditional Knowledge?
A3: Documentation should be a collaborative effort with the community. It is crucial to use methods that are culturally appropriate and agreed upon by the knowledge holders. This can include oral histories, storytelling, and community-based monitoring programs. Scientific institutions can play a role by providing financial and technical support for community-led documentation initiatives, including the creation of digital archives.
Q4: How should we handle intellectual property rights related to Traditional Knowledge?
A4: Indigenous peoples have the right to maintain, control, protect, and develop their intellectual property over their cultural heritage and traditional knowledge. Researchers must establish fair benefit-sharing agreements that provide communities with recognition and, where appropriate, financial compensation when their knowledge is used.
Q5: What is the stance of the ACHP on integrating Indigenous Knowledge?
A5: The ACHP recognizes Indigenous Knowledge as an independent and valid line of evidence in the historic preservation process. The council has developed a policy statement to guide federal agencies in integrating this knowledge into the Section 106 process, which involves identifying historic properties, assessing the effects of undertakings, and resolving adverse effects.
Troubleshooting Guides
This section addresses specific issues that may arise during collaborative research projects.
Issue 1: Discrepancies between Traditional Knowledge and Scientific Data
- Problem: Your scientific data appears to contradict the information shared by traditional knowledge holders.
- Troubleshooting Steps:
  - Re-evaluate Scales: Consider the temporal and spatial scales of both knowledge systems. Traditional Ecological Knowledge (TEK) is often based on long-term observations within a specific locality, which may differ from the scale of scientific data collection.
  - Examine Methodology: Review your data collection methods. Are they culturally sensitive and appropriate for the context? The way questions are asked can influence the responses.
  - Facilitate Dialogue: Initiate a respectful dialogue with the knowledge holders to discuss the apparent discrepancies. This can often lead to new insights and a more holistic understanding.
  - Consider Complementarity: View the different knowledge systems as complementary rather than contradictory. They may be revealing different facets of the same complex reality.
Issue 2: Lack of Community Engagement in the Research Project
- Problem: You are struggling to get meaningful participation from the Indigenous community.
- Troubleshooting Steps:
  - Assess Your Approach: Have you built a foundation of trust? Purely extractive research is unlikely to foster engagement.
  - Identify Community Priorities: Ensure your research questions are relevant and beneficial to the community. Collaborative research should address their needs and concerns.
  - Engage Cultural Advisors: Work with cultural advisors who can guide you on the appropriate protocols for engaging with the community.
  - Invest Time: Building relationships takes time. Be patient and consistent in your engagement efforts.
Issue 3: Difficulty in Integrating Different Worldviews in Drug Discovery
- Problem: Your team is finding it challenging to reconcile the holistic understanding of traditional medicine with the reductionist approach of modern drug discovery.
- Challenge: Traditional remedies often involve complex mixtures of compounds that act synergistically, which can be difficult to analyze with standard pharmacological methods.
- Troubleshooting Steps:
  - Adopt a Systems Biology Approach: Instead of focusing on a single active compound, investigate the interactions of multiple components.
  - Reverse Pharmacology: Start with the clinical efficacy of the traditional remedy and work backward to understand its mechanisms of action.
  - Interdisciplinary Collaboration: Foster collaboration between ethnobotanists, pharmacologists, and traditional healers to bridge the conceptual gaps.
Quantitative Data Summary
| Integration Metric | Challenge Level (1-5) | Success Factor (1-5) | Key Reference |
| Establishing Trust | 4 | 5 | |
| Data Interpretation | 3 | 4 | |
| Benefit Sharing | 4 | 5 | |
| Policy Integration | 3 | 4 |
Challenge Level: 1 = Low, 5 = High. Success Factor: 1 = Low Impact, 5 = High Impact.
Experimental Protocols
Protocol 1: Community-Based Participatory Research for Ethnobotanical Data Collection
- Objective: To document traditional knowledge of medicinal plants in a culturally respectful and scientifically rigorous manner.
- Methodology:
  - Community Engagement: Establish a formal partnership with the Indigenous community and create a joint research committee.
  - Informed Consent: Obtain free, prior, and informed consent from all participants, ensuring they understand the research goals and how the knowledge will be used.
  - Data Collection: Conduct semi-structured interviews and field walks with traditional knowledge holders. Use audio and video recordings with permission.
  - Voucher Specimens: Collect plant specimens for botanical identification and deposit them in a herbarium, with duplicate specimens provided to the community.
  - Data Verification: Review the collected data with the knowledge holders to ensure accuracy and appropriate interpretation.
  - Knowledge Sharing: Develop a plan for sharing the research findings with the community in a culturally appropriate format.
Protocol 2: Integrated "Omics" Approach for Analyzing Traditional Herbal Mixtures
- Objective: To characterize the chemical composition and biological activity of a traditional herbal remedy.
- Methodology:
  - Metabolomics: Use techniques like LC-MS and NMR to obtain a comprehensive chemical profile of the herbal mixture.
  - Transcriptomics: Treat relevant cell lines or animal models with the herbal extract and analyze changes in gene expression using RNA-sequencing.
  - Proteomics: Analyze changes in protein expression in response to the herbal extract using mass spectrometry.
  - Bioinformatics Analysis: Integrate the "omics" data to identify potential active compounds and signaling pathways targeted by the herbal mixture.
  - In Vivo Validation: Validate the findings from the "omics" analysis in animal models of the relevant disease.
Visualizations
Caption: The logical flow from distinct knowledge systems to integrated, sustainable outcomes.
Caption: A streamlined workflow for drug discovery integrating traditional and scientific methods.
References
- 1. The Value of Traditional Ecological Knowledge for the Environmental Health Sciences and Biomedical Research - PMC [pmc.ncbi.nlm.nih.gov]
- 2. indigenousclimatehub.ca [indigenousclimatehub.ca]
- 3. Problems with integrating traditional ecological knowledge into contemporary resource management [fao.org]
- 4. ACHP Members Approve Groundbreaking Policy Statement on Indigenous Knowledge and Historic Preservation, PreservationDirectory.com [preservationdirectory.com]
- 5. noaa.gov [noaa.gov]
Technical Support Center: Predictive Modeling for At-Risk Cultural Heritage
This technical support center provides troubleshooting guidance and answers to frequently asked questions for researchers and scientists developing predictive models to identify cultural heritage sites at risk.
Frequently Asked Questions (FAQs)
Q1: What is predictive modeling in the context of cultural heritage?
A1: Predictive modeling for cultural heritage uses statistical techniques and algorithms to forecast the location of potential archaeological sites or assess the risk to known sites.[1][2] These models analyze the relationships between the locations of known sites and various environmental, cultural, and historical factors to identify areas with a high probability of containing undiscovered archaeological remains or to predict future threats.[2][3]
Q2: What are the primary applications of predictive modeling for cultural heritage?
A2: The two main applications are:
- Cultural Heritage Management: Predicting archaeological site locations to guide land use planning and development, helping to minimize the destruction of undiscovered sites.[2]
- Academic Research: Gaining insights into past human behavior, settlement patterns, and interactions with the landscape.
Q3: What are the main types of predictive models used in archaeology?
A3: Predictive models in archaeology are primarily categorized as:
- Inductive Models: These are data-driven and predict site locations based on observed patterns in a sample of known sites. They identify correlations between existing archaeological data and landscape features.
- Deductive Models: These are theory-driven and predict site locations based on assumptions and fundamental notions about human behavior.
Q4: What are common risk factors for cultural heritage sites?
A4: Cultural heritage sites face threats from multiple factors, including natural disasters (landslides, earthquakes, debris flows), human activities (urban road networks), and the inherent vulnerability of the heritage materials themselves. Climate change, including rising sea levels and extreme weather events, also poses a significant threat.
Q5: How can the interpretability of complex machine learning models be improved?
A5: Techniques like SHAP (SHapley Additive exPlanations) can be used to systematically evaluate the contribution of each influencing factor to the model's prediction. This helps in understanding why a model makes a certain risk assessment, making the results more transparent and trustworthy.
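The idea behind SHAP can be illustrated with an exact Shapley-value computation for a tiny model: each feature's attribution is its average marginal contribution to the prediction over all feature orderings. This brute-force sketch is feasible only for a handful of features; in practice the SHAP library approximates these values efficiently for real models. The model and numbers here are toy assumptions.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for one prediction, by brute force over subsets.

    Features not in a subset are held at their baseline value. Feasible only
    for small n; SHAP-style libraries approximate this for real models.
    """
    n = len(x)

    def value(subset):
        # Evaluate the model with features in `subset` taken from the
        # instance and all other features held at the baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(s) | {i}) - value(set(s)))
        phi.append(total)
    return phi

# For a linear model, feature i's Shapley value is w_i * (x_i - baseline_i):
risk = lambda z: 2.0 * z[0] + 3.0 * z[1]   # toy "risk score"
print(shapley_values(risk, [1.0, 2.0], [0.0, 0.0]))  # [2.0, 6.0]
```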
Troubleshooting Guides
This section provides solutions to common problems encountered during the development and application of predictive models for cultural heritage.
| Problem/Error | Possible Cause(s) | Suggested Solution(s) |
| Model predictions are inaccurate or have low predictive power. | Poor quality or availability of input data (incomplete, outdated, low resolution). Lack of relevant environmental, social, or cultural input data. Insufficient spatial or temporal resolution of data. | Ensure high-quality, high-resolution, and up-to-date datasets. Integrate a wider range of variables, including social and cultural factors, alongside environmental data. Utilize data fusion techniques to combine multi-modal survey data (e.g., photogrammetry, laser scanning, thermography). |
| Model is environmentally deterministic, ignoring human factors. | Over-reliance on easily accessible environmental variables (e.g., slope, proximity to water). | Incorporate socio-cultural and historical data to represent the human element in site selection. Develop models based on an understanding of human behavior and cultural systems, rather than just environmental correlations. |
| The model seems to be reinforcing existing biases in the archaeological record. | The model is trained on known sites, leading to a self-fulfilling sampling strategy where new sites are only looked for in areas predicted to have a high density of sites. | Be aware of the potential for a self-fulfilling prophecy. Use models to guide, not dictate, field surveys. It is often a legal requirement to survey an area regardless of the model's prediction. A good model should predict where sites are unlikely to occur as well as where they are likely to occur. |
| Difficulty in integrating and processing data from various sources. | Data is in different formats (GIS, CAD, 3D models), scales, and resolutions. | Develop a robust data fusion pipeline to integrate multi-modal data. Utilize Geographic Information Systems (GIS) to create detailed spatial databases that can synthesize data from diverse sources like satellite imagery, topographic maps, and historical records. |
| The model's decision-making process is a "black box". | Use of complex, non-linear models like deep learning or ensemble methods. | Employ explainable AI (XAI) methods, such as SHAP, to improve the interpretability of the assessment results. |
Quantitative Data Summary
The following tables summarize performance metrics from various predictive modeling studies in cultural heritage.
Table 1: Performance of Different Models in Archaeological Predictive Modeling (APM)
| Model | Study Area | Evaluation Metric (AUC) | Evaluation Metric (Kvamme's Gain) |
| AM_FR (Hybrid) | Japan | Most Satisfactory | 0.78 |
| MaxEnt | Japan | - | 0.92 |
| FR | Japan | - | 0.72 |
| AM_FR (Hybrid) | Shaanxi, China | Most Satisfactory | 0.84 |
| MaxEnt | Shaanxi, China | - | 0.89 |
| FR | Shaanxi, China | - | 0.83 |
| Source: Archaeological Predictive Modeling Using Machine Learning and Statistical Methods for Japan and China. |
Table 2: Risk Assessment of Cultural Heritage Sites
| Model | Study Area | Key Finding |
| LightGBM | Ancient Tea Horse Road, China | 52.36% of cultural heritage sites were classified as at medium and high risk. |
| Source: Cultural Heritage Risk Assessment Based on Explainable Machine Learning Models. |
Table 3: Model Performance in Artwork Analysis
| Model Type | Application | Reported Accuracy |
| Machine Learning Algorithms (e.g., Faster R-CNN, Reinforcement Learning) | Sketch artist attribution | ~92% |
| Source: Machine Learning Models for Artist Classification of Cultural Heritage Sketches. |
Experimental Protocols
Protocol 1: General Workflow for Building an Archaeological Predictive Model
This protocol outlines the fundamental steps for creating and evaluating a predictive model for archaeological site locations.
- Initial Data Preparation and Cleaning:
  - Gather locational data of known archaeological sites.
  - Collect relevant environmental, topographic, and hydrological data in vector or raster format (e.g., elevation, slope, distance to water sources).
  - Clean the data to remove inaccuracies and ensure consistency. This phase can take up to 50% of the project time.
- Model Creation and Calibration:
  - Select appropriate statistical or machine learning techniques (e.g., Logistic Regression, MaxEnt, LightGBM, Deep Learning).
  - Divide the known site data into a training set (e.g., 70%) and a testing set (e.g., 30%).
  - Train the model on the training dataset to learn the relationships between site locations and the predictive variables.
- Model Finalization and Evaluation:
  - Use the trained model to predict the likelihood of site presence across the entire study area.
  - Test the model's performance using the withheld testing data.
  - Evaluate the model using statistical techniques such as the area under the receiver operating characteristic curve (AUC) and Kvamme's Gain.
  - Generate predictive maps that classify the landscape into different probability zones (e.g., very low, low, medium, high, very high).
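The evaluation step above can be sketched directly. Kvamme's Gain is 1 minus the ratio of the proportion of area classed high-probability to the proportion of known sites captured by that area; AUC can be computed rank-wise as the probability that a random site location outscores a random non-site location. The scores below are illustrative, not from any of the cited studies.

```python
def kvamme_gain(pct_area_high, pct_sites_high):
    """Kvamme's Gain = 1 - (% of study area classed high-probability /
    % of known sites falling in that area). Values near 1 indicate a
    model that concentrates many sites in little area."""
    return 1.0 - pct_area_high / pct_sites_high

def auc(site_scores, nonsite_scores):
    """Rank-based AUC: the probability that a random site location receives
    a higher model score than a random non-site location (ties count half)."""
    wins = ties = 0
    for s in site_scores:
        for t in nonsite_scores:
            if s > t:
                wins += 1
            elif s == t:
                ties += 1
    return (wins + 0.5 * ties) / (len(site_scores) * len(nonsite_scores))

# A model that captures 85% of sites in just 20% of the landscape:
print(kvamme_gain(0.20, 0.85))             # ~0.765
print(auc([0.9, 0.8, 0.7], [0.4, 0.3]))    # 1.0
```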
Protocol 2: Integrated 3D Survey for At-Risk Cultural Heritage
This protocol describes a methodology for creating a comprehensive 3D digital twin of a cultural heritage artifact by integrating multi-modal survey data.
- Multi-Modal Data Acquisition:
  - High-Resolution Photogrammetry: Capture overlapping images of the artifact from multiple angles to create a detailed 3D model.
  - Laser Scanning: Use terrestrial laser scanning (TLS) to capture precise geometric data of the artifact's surface.
  - Diagnostic Thermography: Use infrared thermography to gather data on the thermal properties of the artifact, which can indicate subsurface defects or material changes.
- Data Fusion and Processing:
  - Develop a data fusion pipeline to integrate the data from the different sources.
  - Use a score-based point cloud denoising algorithm to improve the geometric accuracy of the combined 3D model.
- 3D Model Deployment and Analysis:
  - Deploy the integrated and processed 3D model in a Virtual Reality (VR) environment (e.g., using Unreal Engine).
  - The VR application allows for interactive engagement with the artifact, providing access to its geometry, texture, and underlying diagnostic data.
  - This digital twin serves as a powerful analytical tool for conservators and researchers.
Visualizations
References
Addressing limitations of current dating techniques in ACHP reports.
Technical Support Center: Archaeological Dating Techniques
This center provides troubleshooting guides and frequently asked questions (FAQs) to address common limitations and issues encountered with archaeological dating methods. It is intended for researchers and scientists requiring precise chronological control for cultural resource management, including the preparation of reports for the Advisory Council on Historic Preservation (ACHP).
Frequently Asked Questions (FAQs)
Q1: Why are my radiocarbon (¹⁴C) dates inconsistent with the expected age from stratigraphy?
A1: Discrepancies between radiocarbon dates and stratigraphic context are common and can stem from several factors:
- Contamination: The sample may be contaminated with modern carbon (making it appear younger) or ancient carbon (making it appear older). Sources include groundwater, soil humic acids, or improper handling and storage.[1][2]
- "Old Wood" Problem: The dated material, such as charcoal or a wooden beam, may have been sourced from a tree that was significantly older than the archaeological context in which it was found.[3][4] Timber was often reused in antiquity, further complicating the association between the wood's age and its final deposition.[5]
- Atmospheric Fluctuations: The concentration of atmospheric ¹⁴C has not been constant over time due to changes in cosmic radiation and the Earth's magnetic field. This requires calibration using established curves (e.g., IntCal20) to convert radiocarbon years (BP) into calendar years (cal BC/AD), a process that can introduce uncertainties, especially in periods with "wiggles" or plateaus in the curve.
- Reservoir Effects: Samples from marine or freshwater environments can incorporate "old" carbon from their surroundings, leading to dates that are artificially ancient. A reservoir correction must be applied, which has its own associated uncertainty.
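The conventions behind these points can be made concrete. A conventional radiocarbon age is computed from the measured fraction of modern carbon using the Libby mean-life (8033 yr, by convention), and a marine reservoir offset is subtracted before calibration. The ~400 yr global average and the sample values are illustrative only; real work applies the marine calibration curve in software such as OxCal or CALIB.

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; by convention, from the Libby half-life of 5568 yr

def conventional_c14_age(fraction_modern):
    """Conventional radiocarbon age (years BP) from the measured
    fraction of modern carbon (F14C)."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

def reservoir_corrected(age_bp, local_delta_r, global_r=400.0):
    """Subtract a marine reservoir offset before calibration.

    global_r ~400 yr is a common global-average approximation; local_delta_r
    is the site-specific deviation (illustrative values only).
    """
    return age_bp - (global_r + local_delta_r)

# Half the modern 14C activity corresponds to one Libby half-life (~5568 BP):
print(round(conventional_c14_age(0.5)))  # 5568
```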
Q2: What are the primary limitations of Thermoluminescence (TL) dating?
A2: TL dating is powerful for heated materials like pottery or burnt flint, but it has key limitations:
- Signal Saturation: Over long periods (typically several hundred thousand years), the mineral grains in a sample can become saturated with radiation, meaning they cannot store any more energy. Beyond this point, the method cannot provide an accurate age.
- Environmental Radiation Measurement: TL dating requires an accurate measurement of the annual radiation dose from the surrounding burial environment (soil, rock). If the object was moved in the past, or if the burial environment has changed (e.g., fluctuations in water content), the calculated age will be incorrect.
- Incomplete Zeroing: The "clock" is reset when the material is heated to a high temperature (e.g., firing pottery). If the heating event was not hot enough or long enough to completely erase the previous TL signal, the resulting age will be erroneously old.
- Recent Samples: The method is generally not suitable for very recent artifacts (a few years or decades old) because the amount of radiation absorbed may be too small to detect reliably.
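These limitations all flow from the basic TL age equation, Age = equivalent dose ("paleodose") / environmental dose rate, which shows why dose-rate errors propagate directly into the age. A minimal sketch; the saturation threshold is purely illustrative, since the real ceiling depends on the mineral.

```python
def tl_age_ka(paleodose_gy, dose_rate_gy_per_ka, saturation_gy=500.0):
    """Thermoluminescence age in thousands of years (ka).

    Age = equivalent dose (paleodose, Gy) / environmental dose rate (Gy/ka).
    saturation_gy is an illustrative ceiling: near saturation the signal
    no longer grows with dose and the age becomes unreliable.
    """
    if paleodose_gy >= saturation_gy:
        raise ValueError("signal at or near saturation; age unreliable")
    return paleodose_gy / dose_rate_gy_per_ka

# 12 Gy accumulated at 3 Gy/ka -> 4 ka; a 10% error in the dose rate
# shifts the computed age by 10% in the opposite direction:
print(tl_age_ka(12.0, 3.0))   # 4.0
print(tl_age_ka(12.0, 3.3))   # ~3.64
```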
Q3: When is Dendrochronology (Tree-Ring Dating) not a suitable method?
A3: While highly precise, dendrochronology is subject to several constraints:
- Geographic and Species Limitation: The method is only applicable to tree species that produce distinct annual growth rings, which is not the case for many tropical species. A master chronology for the specific region and species must already exist for cross-dating.
- Preservation: The wood must be well-preserved enough for the rings to be clearly visible and measurable.
- Number of Rings: A sample typically needs a minimum of 30 intact rings to establish a confident match with a master chronology.
- Dating the Event: Dendrochronology dates the felling of the tree (if the outer bark edge is present), not necessarily when the timber was used in a structure. The reuse of old timbers is a common issue that requires careful archaeological interpretation.
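Cross-dating against a master chronology is, at its core, a sliding-correlation problem. A toy sketch: real cross-dating first detrends and standardizes ring widths and requires many more rings (typically 30+, as noted above) than this example; the data here are invented.

```python
def best_match(sample, master):
    """Slide a ring-width series along a master chronology and return the
    (offset, correlation) of the best Pearson match."""
    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return cov / (va * vb) ** 0.5

    best = None
    for off in range(len(master) - len(sample) + 1):
        r = pearson(sample, master[off:off + len(sample)])
        if best is None or r > best[1]:
            best = (off, r)
    return best

master = [1.2, 0.8, 1.5, 0.6, 1.1, 1.9, 0.7, 1.3, 0.9, 1.4]
sample = master[4:9]   # rings laid down starting at master year 4
print(best_match(sample, master))  # best offset is 4, correlation ~1.0
```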
Q4: What causes anomalous results in Potassium-Argon (K-Ar) dating?
A4: K-Ar dating is essential for dating volcanic materials over vast timescales but is prone to specific issues:
- Argon Loss: If the rock or mineral has been subjected to high temperatures or weathering after its formation, some of the radiogenic argon-40 gas may have escaped. This "leaking" resets the radioactive clock, leading to an age that appears younger than the true age.
- Excess Argon: The sample may incorporate "excess" argon-40 from the environment or from older materials during its formation. This additional argon, not produced by potassium-40 decay within the sample, can make the rock appear significantly older than it is.
- Age Range Limitation: The method is generally not reliable for samples younger than about 100,000 years, as the amount of argon-40 produced is often too small to be measured accurately.
- Material Suitability: K-Ar dating is only suitable for potassium-rich volcanic rocks and minerals like feldspar and mica. It cannot be used directly on sedimentary rocks where most artifacts are found.
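Both the argon-loss and excess-argon anomalies follow from the K-Ar age equation, which converts the radiogenic ⁴⁰Ar/⁴⁰K ratio to an age using the conventional ⁴⁰K decay constants. A sketch showing how excess argon counted as radiogenic inflates the apparent age (the sample ratios are invented):

```python
import math

LAMBDA_TOTAL = 5.543e-10  # total 40K decay constant, 1/yr (conventional value)
LAMBDA_EC = 0.581e-10     # branch decaying to 40Ar (electron capture), 1/yr

def k_ar_age(ar40_radiogenic, k40):
    """K-Ar age in years from the radiogenic 40Ar*/40K ratio:

        t = (1/lambda_total) * ln(1 + (lambda_total/lambda_ec) * 40Ar*/40K)
    """
    ratio = ar40_radiogenic / k40
    return (1.0 / LAMBDA_TOTAL) * math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ratio)

# Excess (non-radiogenic) argon mistaken for radiogenic argon inflates the age:
true_ratio = 1.0e-4
print(round(k_ar_age(true_ratio, 1.0)))        # apparent age for the true ratio
print(round(k_ar_age(true_ratio * 1.5, 1.0)))  # 50% excess argon -> older apparent age
```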
Troubleshooting Guides
Problem: Received a radiocarbon date that is thousands of years off from expected age.
| Possible Cause | Troubleshooting Steps |
| --- | --- |
| Sample Contamination | 1. Review sample collection and handling protocols to identify potential sources of modern or ancient carbon introduction. 2. Ensure the laboratory performed appropriate pretreatment steps (e.g., acid-alkali-acid washes) to remove contaminants like humic acids. 3. Submit a new, carefully selected sample from a sealed and undisturbed context for re-analysis. |
| Calibration Curve Issues | 1. Check if the radiocarbon age falls on a known plateau or steep "wiggle" in the calibration curve. 2. Use calibration software (e.g., OxCal) to visualize the probability distribution of the calibrated date. This may show multiple intercepts with the calendar timescale, explaining the ambiguity. 3. If possible, use Bayesian statistical modeling to combine the problematic date with other dates from the site to constrain the age range. |
| "Old Wood" Effect | 1. Assess the context of the sample. Was it from a large, long-lived timber? Could it have been reused from an older structure? 2. If possible, date short-lived samples from the same context (e.g., seeds, twigs) to get a more accurate date for the depositional event. |
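The contamination row can be quantified with the conventional radiocarbon age relation, t = -8033·ln(F), where F is the fraction of modern carbon and 8033 yr is the Libby mean life. A minimal sketch, using illustrative numbers, of how even 1% modern carbon makes an old sample read markedly younger:

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; conventional 14C ages use the Libby half-life

def age_from_F(F):
    """Conventional radiocarbon age (yr BP) from fraction modern carbon F."""
    return -LIBBY_MEAN_LIFE * math.log(F)

def F_from_age(age_bp):
    return math.exp(-age_bp / LIBBY_MEAN_LIFE)

true_age = 30000.0
F_true = F_from_age(true_age)
# Mix in 1% modern carbon (F = 1.0), e.g. from handling or humic acids:
F_mixed = 0.99 * F_true + 0.01 * 1.0
apparent = age_from_F(F_mixed)        # noticeably younger than 30,000 BP
```

For a true age of 30,000 BP the contaminated sample reads well over 2,500 years too young, which is why pretreatment matters most for samples near the method's limit.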
Problem: Dating results from different methods are in conflict.
| Possible Cause | Troubleshooting Steps |
| --- | --- |
| Methods Dating Different Events | 1. Confirm what each method is dating. For example, a TL date on a pot dates its firing, while a ¹⁴C date on food residue inside it dates the residue's origin. These are not the same event. 2. Similarly, a dendrochronology date on a beam dates when the tree was cut, not when the building was destroyed (which might be dated by ¹⁴C on associated charcoal). |
| Systematic Error in One Method | 1. Re-evaluate the assumptions for each method. Was the K-Ar sample from a closed system? Was the environmental dose rate for the TL sample accurately measured and stable over time? 2. Review all potential limitations for each technique used (see FAQs above). |
| Contextual Disturbance | 1. Re-examine the site's stratigraphy for evidence of disturbance, such as animal burrows, pits, or erosion, which could mix materials of different ages. 2. Ensure all dated samples came from a securely sealed and primary depositional context. |
| Need for Cross-Validation | 1. Treat the conflicting results as a research problem. The solution often comes from integrating more lines of evidence. 2. Employ a third, independent dating method if possible. 3. Use cross-dating with diagnostic artifacts (e.g., pottery styles, tool types) from well-dated regional chronologies to see which result is more plausible. |
Quantitative Data Summary
The table below summarizes the effective age ranges and typical precision of common archaeological dating techniques.
| Dating Method | Material Dated | Effective Age Range | Typical Precision | Key Limitations |
| --- | --- | --- | --- | --- |
| Radiocarbon (¹⁴C) AMS | Organic materials (charcoal, wood, bone, shell) | Up to ~50,000 years | ± 20 to 50 years | Contamination, calibration curve variations, reservoir & old wood effects. |
| Dendrochronology | Wood with distinct ring patterns | Up to ~13,900 years (depending on regional chronology) | To the exact year (if bark edge is present) | Species/geographic limitations, requires master chronology, reuse of timber. |
| Thermoluminescence (TL) | Fired materials (pottery, burnt flint, heated rock) | A few thousand to several hundred thousand years | ± 5% to 10% of the age | Requires accurate environmental dose rate, signal saturation, incomplete zeroing. |
| Potassium-Argon (K-Ar) | Volcanic rock and ash | > 100,000 years to billions of years | ± 1% to 5% | Argon loss/excess, not for young samples, only for volcanic material. |
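The summary table can also be encoded as data so that candidate methods for a given sample are selected programmatically. The sketch below simply restates the ranges above; the material keywords are illustrative keys, not a standard vocabulary.

```python
METHODS = [
    # (name, applicable materials, min age in years, max age in years)
    ("Radiocarbon (14C) AMS", {"charcoal", "wood", "bone", "shell"}, 0, 50_000),
    ("Dendrochronology", {"wood"}, 0, 13_900),
    ("Thermoluminescence (TL)", {"pottery", "burnt flint", "heated rock"},
     2_000, 500_000),
    ("Potassium-Argon (K-Ar)", {"volcanic rock", "volcanic ash"},
     100_000, 4_500_000_000),
]

def applicable_methods(material, expected_age_yr):
    """List methods from the summary table that cover this material and age."""
    return [name for name, mats, lo, hi in METHODS
            if material in mats and lo <= expected_age_yr <= hi]
```

For example, a 5,000-year-old wooden beam could in principle be dated by both AMS radiocarbon and dendrochronology, which is exactly the cross-validation scenario discussed above.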
Experimental Protocols
Protocol 1: Sample Collection for AMS Radiocarbon Dating
This protocol is designed to minimize contamination, a primary source of error in ¹⁴C dating.
1. Site Identification: Identify a secure, undisturbed stratigraphic context for sample collection. Avoid areas with visible root penetration, animal burrows, or evidence of water percolation.
2. Tool Preparation: Use clean, dedicated tools (e.g., steel trowels, tweezers) that have been scrubbed and rinsed with deionized water. Do not touch the sample with bare hands. Wear powder-free nitrile gloves.
3. Sample Extraction:
   - For charcoal, select discrete, identifiable fragments. Do not combine small flecks from a wide area.
   - For bone, select dense cortical bone from a single, identifiable element. Avoid spongy, porous bone.
   - For wood, select samples from the outermost rings if possible to date the felling event.
4. Packaging: Immediately place the sample in clean, heavy-duty aluminum foil. Do not use plastic bags, paper, or cardboard, as these can introduce modern carbon. Double-wrap the foil packet.
5. Labeling: Place the foil-wrapped sample in a labeled, sealable polyethylene bag. The label should include site name, context number, sample number, date of collection, and material type. The label should be on the bag, not in direct contact with the sample.
6. Documentation: Record the precise 3D location of the sample, its stratigraphic context, and any associated artifacts. Photograph the sample in situ before collection.
7. Submission: Submit the sample to the laboratory with all contextual documentation. Clearly state the suspected age to help the lab select appropriate procedures and avoid cross-contamination with samples of vastly different ages.
Visualizations
Logical Workflow for Dating and Interpretation
The following diagram illustrates the logical workflow from sample selection to the final interpretation of a date, highlighting key decision points and potential pitfalls.
Caption: Workflow for archaeological dating, from sample collection to final interpretation.
Cross-Validation of Dating Methods
This diagram shows the logical relationship between different dating methods used to overcome individual limitations and build a robust chronology.
Caption: Cross-validation strategy combining multiple dating methods for a robust chronology.
References
Overcoming challenges in the digital documentation of historic properties.
This technical support center provides troubleshooting guides and frequently asked questions (FAQs) to address common challenges encountered during the digital documentation of historic properties. The information is intended for researchers, scientists, and other professionals undertaking digital preservation projects.
Frequently Asked Questions (FAQs)
Data Acquisition: 3D Scanning & Photogrammetry
Q1: What are the most common causes of incomplete 3D scan data and how can I prevent them?
A1: Incomplete data in 3D scans, often appearing as holes or missing sections in the point cloud, is a frequent issue.[1] The primary causes include:
- Occlusion: Parts of the property are hidden from the scanner's line of sight by other objects (e.g., columns, furniture, vegetation).
- Surface Properties: Highly reflective, transparent, or very dark surfaces can absorb or scatter the laser light, preventing the scanner from getting a reading.[1]
- Scanner Placement: Insufficient overlap between individual scans or suboptimal scanner positioning can lead to gaps in the data.
Troubleshooting Steps:
1. Thorough Site Reconnaissance: Before scanning, walk through the site to identify potential occlusions and challenging surfaces. Plan your scanner positions to ensure multiple lines of sight to all critical areas.
2. Multiple Scan Passes: It is rare to capture a complete dataset in a single pass.[1] Conduct multiple scans from different angles and positions to cover all surfaces.
3. Surface Treatment (with caution): For reflective or transparent surfaces, consider using a temporary matting spray. However, always test on a small, inconspicuous area first and ensure it will not damage the historic material.
4. Adjust Scanner Settings: Some scanners allow for adjustment of laser intensity or exposure settings to better capture data from difficult surfaces.[1]
Q2: My point cloud registrations are misaligned. What are the steps to correct this?
A2: Point cloud registration is the process of aligning multiple scans into a single, cohesive model. Misalignment is a common problem that can lead to inaccurate 3D models.
Troubleshooting Workflow for Registration Errors:
1. Sequential Registration: Register your scans in the order they were captured. This makes it easier to identify the point at which an error was introduced.
2. Sufficient Overlap and Control Points: Ensure there is adequate overlap (at least 30-40%) between adjacent scans. Use a sufficient number of common reference points or targets. Misalignment often occurs when there are not enough physical markers to link scans.[1]
3. Review Correlation Errors: After an initial automatic registration, review the error reports. Most software will provide a measure of the alignment accuracy between scans.
4. Manual Refinement: If automatic registration fails, use manual alignment tools in your software to pick common points between scans to guide the alignment process.
5. Remove High-Error Targets: If using physical targets, a single poorly placed or moved target can throw off the entire registration. Remove targets with the highest error values and re-run the registration.
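The target-removal step amounts to a residual check: compare each matched target's coordinates in the two registered scans, report the overall RMS, and flag the worst offender. This is a simplified stand-in for the error report your registration software produces; the target IDs and coordinates below are illustrative.

```python
import math

def residuals(targets_a, targets_b):
    """Distance between matched target centers from two registered scans,
    plus the overall RMS. Both dicts map target ID -> (x, y, z) in meters."""
    per_target = {}
    for tid in targets_a.keys() & targets_b.keys():
        per_target[tid] = math.dist(targets_a[tid], targets_b[tid])
    rms = math.sqrt(sum(d * d for d in per_target.values()) / len(per_target))
    return per_target, rms

def worst_target(per_target):
    """Candidate to remove before re-running the registration."""
    return max(per_target, key=per_target.get)

# Hypothetical target coordinates after an initial registration:
scan_a = {"T1": (0.0, 0.0, 0.0), "T2": (1.0, 0.0, 0.0), "T3": (0.0, 1.0, 0.0)}
scan_b = {"T1": (0.0, 0.0, 0.002), "T2": (1.0, 0.0, 0.001), "T3": (0.0, 1.05, 0.0)}
per, rms = residuals(scan_a, scan_b)
```

Here T3 stands out with a 5 cm residual against millimeter-level errors elsewhere, which is the pattern a moved target typically produces.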
Q3: What are the best practices for image overlap and camera settings in photogrammetry?
A3: High-quality photogrammetric models depend heavily on the quality and consistency of the input photographs.
Best Practices for Photogrammetry Image Acquisition:
- Image Overlap: Aim for at least 60-80% overlap between adjacent images. This ensures that the software has enough common features to accurately triangulate the camera positions. For drone photogrammetry, an 80% forward and 60% side overlap is a common recommendation.
- Consistent Lighting: Shoot in diffuse, even lighting conditions, such as on an overcast day, to avoid harsh shadows that can obscure details.
- Sharp Focus: All images must be in sharp focus. Blurry images will result in poor quality 3D models. Use a tripod and a remote shutter release to minimize camera shake, especially in low-light conditions.
- Consistent Camera Settings: Use a manual camera mode to keep the aperture, shutter speed, and ISO consistent across all photos in a set. This prevents variations in exposure and depth of field that can confuse the processing software.
- RAW Image Format: If possible, shoot in RAW format. This provides more flexibility for adjusting exposure and white balance in post-processing without losing image quality.
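The overlap figures translate into camera-station spacing through a simple pinhole-camera footprint calculation. A minimal sketch, assuming a flat subject and nominal sensor/lens values (the 36 mm sensor, 35 mm lens, and 10 m stand-off below are illustrative):

```python
def footprint_m(sensor_mm, focal_mm, distance_m):
    """Width of the area one image covers on a flat subject (pinhole model)."""
    return sensor_mm / focal_mm * distance_m

def station_spacing_m(footprint, overlap):
    """How far to move the camera between shots for a given overlap fraction."""
    return footprint * (1.0 - overlap)

# Full-frame camera (36 mm sensor width), 35 mm lens, 10 m from a facade:
w = footprint_m(36.0, 35.0, 10.0)      # roughly a 10.3 m wide footprint
step = station_spacing_m(w, 0.80)      # 80% overlap -> move about 2 m per shot
```

The same arithmetic applies vertically and to drone flight lines: higher overlap means proportionally tighter spacing and more images to process.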
Data Management & Preservation
Q4: What are the key considerations for the long-term preservation of digital documentation data?
A4: The long-term preservation of digital data is a critical challenge, as digital formats and storage media can become obsolete.
Key Strategies for Long-Term Digital Preservation:
- Data Management Plan (DMP): A DMP is a formal document that outlines how you will manage your data throughout the project lifecycle and beyond. It should address data formats, metadata standards, storage, and access policies.
- Use of Open and Standardized Formats: Whenever possible, store your data in open, well-documented file formats. For example, TIFF for images, and OBJ or PLY for 3D models.
- Metadata: Comprehensive metadata is essential for the long-term usability of your data. It provides context and information about how the data was created and how it can be used. Standards such as PREMIS (Preservation Metadata: Implementation Strategies) are widely used.
- Redundant Storage: Store multiple copies of your data in different geographic locations to protect against data loss due to hardware failure, natural disasters, or other catastrophic events.
- Data Migration: Periodically migrate your data to new storage media and file formats to avoid obsolescence. This is an ongoing process that requires active management.
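Redundant copies and migrated data are only trustworthy if you can verify them. A standard companion practice, whose results are typically recorded as fixity metadata (e.g., in PREMIS), is periodic checksum verification. A minimal sketch using SHA-256:

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """SHA-256 of a file, read in chunks so multi-GB scans don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_manifest(manifest):
    """manifest: dict of path -> expected hex digest (created at ingest).
    Returns the files whose digest no longer matches (possible corruption)."""
    return [p for p, expected in manifest.items() if sha256_of(p) != expected]
```

Run the verification against every stored copy on a fixed schedule and after each migration; a mismatch on one copy with intact copies elsewhere is exactly the failure redundant storage is meant to survive.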
Q5: How should I manage the large file sizes generated during 3D documentation projects?
A5: 3D documentation projects can generate massive datasets, often in the terabytes. Effective data management is crucial for a successful project.
Strategies for Managing Large Datasets:
- Tiered Storage: Use a tiered storage approach. Keep actively used data on high-performance, easily accessible storage. Move less frequently accessed data to more affordable, long-term archival storage.
- Data Compression: Use lossless compression techniques to reduce file sizes without sacrificing data quality.
- Data Processing Workflow: Process data in manageable chunks. For example, register and process scans of individual rooms or sections of a building before merging them into a single model.
- Hardware Considerations: Invest in a workstation with sufficient RAM (32GB or more is recommended for large projects), a powerful multi-core processor, and a high-end graphics card.
Troubleshooting Guides
Guide 1: Addressing Incomplete Data in 3D Scans
This guide provides a step-by-step process for diagnosing and resolving issues of incomplete data in your 3D scans.
Guide 2: Correcting Texture Mapping Errors in Photogrammetry
This guide outlines a process for identifying and fixing common texture mapping errors in photogrammetric models.
Quantitative Data Summary
The following tables provide a summary of quantitative data related to digital documentation techniques.
Table 1: Comparison of 3D Digitization Techniques
| Technique | Typical Accuracy | Data Acquisition Speed | Cost of Equipment | Best Suited For |
| --- | --- | --- | --- | --- |
| Terrestrial Laser Scanning (TLS) | 1-5 mm | Medium to Fast | High | Large-scale sites, building exteriors and interiors |
| Structured Light Scanning | 0.01-0.5 mm | Fast | Medium to High | Small to medium-sized artifacts with fine detail |
| Photogrammetry (DSLR) | 2-10 mm | Slow to Medium | Low to Medium | Textured objects, flexible for various scales |
| Drone Photogrammetry | 2-5 cm | Very Fast | Medium | Roofs, large sites, and inaccessible areas |
Note: Accuracy is dependent on the specific equipment, procedures, and processing software used.
Table 2: Estimated Data Storage for a Medium-Sized Historic Building
| Data Type | Raw Data | Processed Data | Archival Data (with metadata) |
| --- | --- | --- | --- |
| Laser Scans (50 scans) | 500 GB - 1 TB | 100 - 200 GB | 150 - 300 GB |
| Photogrammetry (2000 images) | 100 - 150 GB | 50 - 100 GB | 75 - 150 GB |
| Final 3D Model (textured) | N/A | 10 - 50 GB | 15 - 75 GB |
Experimental Protocols
Protocol 1: 3D Laser Scanning of a Historic Interior
Objective: To create a complete and accurate point cloud of a historic room, including architectural details and any significant artifacts.
Methodology:
1. Preparation:
   - Conduct a visual inspection of the room to identify potential occlusions and reflective surfaces.
   - Place spherical or checkerboard targets in stable locations, ensuring at least three targets are visible from each scanner position.
   - Set up the laser scanner on a tripod in the first planned position.
2. Data Acquisition:
   - Perform a low-resolution overview scan to check the scanner's field of view.
   - Set the desired scan resolution and quality settings. For detailed architectural elements, a higher resolution is required.
   - Initiate the high-resolution scan.
   - Once the first scan is complete, move the scanner to the next position, ensuring sufficient overlap with the previous scan.
   - Repeat the scanning process until the entire room has been covered from multiple angles.
3. Data Processing:
   - Import the raw scan data into the registration software.
   - Perform an automatic registration using the targets.
   - Review the registration report and manually refine the alignment if necessary.
   - Once all scans are aligned, create a unified point cloud.
   - Clean the point cloud by removing noise and any unwanted artifacts (e.g., people who may have walked through the scan).
   - Export the final point cloud in a standard format (e.g., .E57, .PTX).
Protocol 2: Drone Photogrammetry of a Historic Building's Exterior
Objective: To create a high-resolution, textured 3D model of a historic building's exterior, including the roof.
Methodology:
1. Flight Planning:
   - Define the survey area around the building.
   - Plan an automated grid flight path for nadir (top-down) images of the roof, ensuring at least 80% forward and 60% side overlap.
   - Plan one or more orbital or manual flights around the building to capture oblique images of the facades from different heights and angles.
   - Set the camera to a high resolution and shoot in RAW format if available.
2. Data Acquisition:
   - Perform a pre-flight checklist to ensure the drone and camera are functioning correctly.
   - Execute the planned flights, monitoring for any changes in weather or lighting conditions.
   - Take additional manual photos of any intricate details that may not be well-captured by the automated flights.
3. Data Processing:
   - Import all images into your photogrammetry software.
   - Align the photos to create a sparse point cloud and camera positions.
   - Generate a dense point cloud from the aligned images.
   - Build a 3D mesh from the dense point cloud.
   - Generate a texture map for the 3D mesh.
   - Export the final textured 3D model in a suitable format (e.g., .OBJ, .FBX).
References
Strategies for streamlining the environmental review process with ACHP.
This technical support center provides troubleshooting guides and frequently asked questions (FAQs) to assist researchers, scientists, and drug development professionals in navigating the environmental review process, specifically Section 106 of the National Historic Preservation Act, in consultation with the Advisory Council on Historic Preservation (ACHP).
Frequently Asked Questions (FAQs)
Q1: What is Section 106 and why is it relevant to my project?
A1: Section 106 of the National Historic Preservation Act (NHPA) is a federal law that requires federal agencies to consider the effects of their projects, known as "undertakings," on historic properties.[1][2] If your research or development project is funded, permitted, licensed, or approved by a federal agency, it is considered an undertaking and you must comply with Section 106.[2][3] The process involves identifying historic properties, assessing the potential effects of the project on them, and consulting with stakeholders to find ways to avoid, minimize, or mitigate any adverse effects.[3]
Q2: Who are the key consulting parties in the Section 106 process?
A2: The primary consulting parties include the lead federal agency, the State Historic Preservation Officer (SHPO), and/or the Tribal Historic Preservation Officer (THPO). Depending on the project's location and potential impact, other consulting parties may include local governments, applicants for federal assistance, and individuals or organizations with a demonstrated interest in the undertaking. Federally recognized Indian tribes and Native Hawaiian organizations that attach religious and cultural significance to affected historic properties must also be consulted.
Q3: What are the main strategies for streamlining the Section 106 review process?
A3: Several strategies can help streamline the Section 106 process:
- Programmatic Agreements (PAs): These agreements can be developed for complex projects or multiple undertakings to establish a tailored review process.
- Alternate Procedures: Federal agencies can develop their own procedures to substitute for the standard Section 106 process, which can result in time and cost savings if approved by the ACHP.
- Integrating NEPA and Section 106: Coordinating the National Environmental Policy Act (NEPA) and Section 106 reviews can improve efficiency and reduce duplication.
- Program Comments: The ACHP can issue a program comment for a particular category of undertakings, which allows agencies to comply with Section 106 by following the steps outlined in the comment.
- Screening Processes: For programs with many similar undertakings, a screening process can be established to exempt projects with no potential to affect historic properties from further review.
Q4: How can I effectively consult with Indian tribes and Native Hawaiian organizations?
A4: Consultation with Indian tribes must be conducted on a government-to-government basis and be sensitive to tribal sovereignty. It is crucial to engage with potentially interested tribes early in the planning process. You should not assume that one tribe's views will be the same as others, as different tribes may have different interests and approaches to protecting historic properties. The ACHP provides specific guidance on consultation with Indian tribes and Native Hawaiian organizations.
Troubleshooting Guide
| Issue | Recommended Solution |
| --- | --- |
| Delays in SHPO/THPO Response | Ensure all required documentation has been submitted correctly. The ACHP has guidance on when 30-day review timeframes are applicable. If a SHPO/THPO is non-responsive, the federal agency may consult with the ACHP in their place. |
| Disagreements Among Consulting Parties | The goal of consultation is to seek, discuss, and consider the views of all participants to reach an agreement. If disagreements arise, the federal agency is responsible for managing the consultation process. For significant disagreements, the ACHP can provide assistance and may choose to participate in the consultation. |
| Project Design Changes After Section 106 Review | If project designs change in a way that could alter the effects on historic properties, the Section 106 process may need to be revisited. This is especially true if a Memorandum of Agreement (MOA) is in place. |
| Uncertainty about "Adverse Effect" | An "adverse effect" occurs when an undertaking may alter the characteristics of a historic property that qualify it for inclusion in the National Register in a way that diminishes its integrity. The ACHP provides guidance on applying the criteria of adverse effect. |
| Complex or Repetitive Projects | For large, complex, or repetitive projects, consider developing a Programmatic Agreement (PA) to streamline the review process for future undertakings. |
Experimental Protocols: Methodologies for Streamlining Section 106
Protocol 1: Developing a Programmatic Agreement (PA)
1. Initiate Consultation: The federal agency notifies the relevant SHPO/THPO and other potential consulting parties of its intent to develop a PA.
2. Define Scope and Purpose: The consulting parties collaboratively define the program, undertakings, or complex project to be covered by the PA.
3. Draft the Agreement: The federal agency, in consultation with the parties, drafts the PA. The agreement should outline the specific procedures for identifying and evaluating historic properties, assessing effects, and resolving adverse effects for the undertakings it covers.
4. Consulting Party Review and Comment: The draft PA is circulated to all consulting parties for review and comment.
5. ACHP Involvement: If the PA involves complex or controversial issues, the ACHP may be invited to participate in the consultation.
6. Finalize and Execute: After incorporating feedback, the PA is finalized and signed by the consulting parties.
7. File with the ACHP: A copy of the executed PA is filed with the ACHP.
Protocol 2: Utilizing an ACHP Program Comment
1. Identify Applicability: Determine if your project falls under a category of undertakings covered by an existing ACHP Program Comment.
2. Follow Comment Stipulations: If applicable, the federal agency must follow the specific steps and requirements outlined in the Program Comment to fulfill its Section 106 responsibilities.
3. Notify Consulting Parties: The agency should still notify the relevant SHPO/THPO and other consulting parties of its intention to use the Program Comment.
4. Document Compliance: The agency must maintain a record demonstrating that it has complied with the terms of the Program Comment.
Data Presentation: Comparison of Standard vs. Streamlined Section 106 Process
| Stage | Standard Section 106 Process (Estimated Timeline) | Streamlined Process (e.g., with PA) (Estimated Timeline) |
| --- | --- | --- |
| Initiation of Consultation | 1-2 weeks | 1-2 weeks |
| Identification of Historic Properties | 4-8 weeks | 2-4 weeks (procedures predefined in PA) |
| Assessment of Adverse Effects | 4-6 weeks | 2-3 weeks (criteria predefined in PA) |
| Resolution of Adverse Effects | 8-12 weeks | 4-6 weeks (process streamlined in PA) |
| Memorandum of Agreement (MOA) | 4-8 weeks | N/A (covered by PA) |
| Total Estimated Timeline | 21-36 weeks | 9-15 weeks |
Note: These are estimated timelines and can vary significantly based on project complexity and the specific circumstances of the consultation.
Visualizations
Caption: Comparison of Standard vs. Streamlined Section 106 workflows.
Caption: The four-step Section 106 review process.
Caption: Relationship between streamlining strategies and outcomes.
References
Technical Support Center: Mitigating Bias in Historic Preservation Analysis
This guide provides troubleshooting advice and frequently asked questions (FAQs) to help researchers, scientists, and conservation professionals identify and mitigate bias in the scientific analysis of historic materials and sites.
Frequently Asked Questions (FAQs)
FAQ 1: How can I mitigate selection and sampling bias when analyzing a large historic structure?
Issue: You are tasked with analyzing the mortar composition of a large 18th-century building. Sampling from easily accessible areas might not represent the entire structure, which may have undergone various repairs or additions over time. This can lead to selection bias, where the samples collected do not accurately represent the whole structure.[1][2][3]
Solution: A stratified sampling strategy is recommended to ensure that different parts of the structure are represented in the sample.[1][4] This method divides the structure into distinct subgroups, or "strata," from which samples are then taken.
Troubleshooting Steps:
1. Identify Strata: Divide the building into logical strata based on known or suspected differences. This can be based on:
   - Construction Phases: Original construction vs. later additions.
   - Environmental Exposure: North-facing walls (more damp) vs. south-facing walls (more sun).
   - Structural Role: Load-bearing walls vs. decorative elements.
   - Observed Condition: Areas with visible decay vs. well-preserved sections.
2. Determine Sample Size: Decide on the number of samples to take from each stratum. This can be proportional to the size of the stratum or weighted based on the research question.
3. Randomized Sampling within Strata: Within each defined stratum, use a random sampling method (e.g., using a grid system and random number generator) to select specific sampling locations. This minimizes the chance of unconsciously selecting "interesting" or easily accessible spots.
4. Documentation: Meticulously document the location and rationale for each sample taken. This is crucial for the reproducibility and transparency of your study.
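The randomized-sampling step can be sketched as a fixed-seed random draw over a grid of candidate cells per stratum, which also makes the sampling plan reproducible for the documentation step. The stratum names and grid sizes below are illustrative.

```python
import random

def stratified_sample(strata, n_per_stratum, seed=0):
    """strata: dict mapping stratum name -> list of candidate grid cells.
    Returns a dict of randomly chosen sampling locations per stratum."""
    rng = random.Random(seed)       # fixed seed keeps the plan reproducible
    plan = {}
    for name, cells in strata.items():
        k = min(n_per_stratum, len(cells))
        plan[name] = sorted(rng.sample(cells, k))
    return plan

# Hypothetical strata: (row, column) cells on gridded wall elevations
strata = {
    "original_north_wall": [(r, c) for r in range(4) for c in range(10)],
    "19thc_addition":      [(r, c) for r in range(2) for c in range(6)],
}
plan = stratified_sample(strata, n_per_stratum=3)
```

Recording the seed alongside the grid definition lets a later analyst regenerate exactly the same sampling plan, supporting the transparency goal in step 4.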
Workflow for Stratified Sampling
FAQ 2: My portable XRF results are inconsistent. How do I troubleshoot for instrumental bias?
Issue: When analyzing a series of metal artifacts believed to be from the same origin, your handheld X-ray Fluorescence (pXRF) spectrometer is giving highly variable readings for key elements. This could be due to instrumental drift, improper calibration, or variations in the sample surface.
Solution: Instrumental bias can be identified and corrected by performing regular calibration checks with Certified Reference Materials (CRMs) or well-characterized internal standards. It is crucial to establish a consistent measurement protocol.
Troubleshooting Protocol:
1. Warm-up Period: Ensure the instrument has been on for the manufacturer-recommended warm-up time to allow the detector and electronics to stabilize.
2. Energy Calibration: Perform the instrument's internal energy calibration check as recommended by the manufacturer.
3. Performance Check with CRM: Before and after a batch of measurements (and periodically during a long session), analyze a CRM with a matrix similar to your artifacts (e.g., a certified bronze alloy).
4. Log Results: Record the CRM readings in a logbook or spreadsheet. Compare these readings to the certified values. If the measured values deviate by more than a predefined tolerance (e.g., >5-10%), the instrument may need recalibration or servicing.
5. Standardize Measurement Conditions: Ensure the distance from the detector to the sample surface is consistent for every measurement. Use a stand or jig if possible.
6. Surface Preparation: Be aware that surface contamination or corrosion can significantly alter results. Document the surface condition and, if permissible, perform a light, localized cleaning on a small area for analysis.
Data Presentation: Example XRF Calibration Check
The following table shows an example log for checking pXRF performance against a certified bronze CRM. A significant deviation in the tin (Sn) reading suggests a potential issue.
| Element | Certified Value (%) | Measured Value (%) | Deviation (%) | Status |
| --- | --- | --- | --- | --- |
| Copper (Cu) | 88.0 | 87.5 | -0.57 | OK |
| Tin (Sn) | 10.0 | 11.5 | +15.0 | High |
| Lead (Pb) | 1.5 | 1.6 | +6.67 | OK |
| Zinc (Zn) | 0.5 | 0.48 | -4.0 | OK |
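The deviation column in the log is a simple relative-error calculation. A sketch that reproduces the table and applies a 10% tolerance (the upper end of the range suggested in the protocol; pick the tolerance that suits your CRM and instrument):

```python
def deviation_pct(certified, measured):
    """Relative deviation of a measured value from the certified value, in %."""
    return 100.0 * (measured - certified) / certified

def check_crm(readings, tolerance_pct=10.0):
    """readings: dict element -> (certified %, measured %). Flags any element
    whose deviation exceeds the tolerance, as in the log above."""
    report = {}
    for elem, (cert, meas) in readings.items():
        d = deviation_pct(cert, meas)
        report[elem] = (round(d, 2), "OK" if abs(d) <= tolerance_pct else "High")
    return report

bronze_crm = {"Cu": (88.0, 87.5), "Sn": (10.0, 11.5),
              "Pb": (1.5, 1.6),  "Zn": (0.5, 0.48)}
report = check_crm(bronze_crm)    # Sn is flagged "High" at +15%
```

A single flagged element, as with tin here, is the cue from step 4 to recalibrate or service the instrument before trusting the batch.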
FAQ 3: How can I avoid confirmation bias when interpreting analytical data?
Issue: A researcher has a strong hypothesis that a specific pigment was used in a medieval manuscript. This preconceived belief might unconsciously lead them to give more weight to data that supports their hypothesis while dismissing data that contradicts it. This is a classic example of confirmation bias.
Solution: Implement a blind analysis protocol. This involves concealing the sample's identity or context from the analyst until after the data has been processed and preliminary interpretations have been made.
Experimental Protocol: Blind Analysis for Pigment Identification
This protocol is designed for analyzing microscopic samples from a manuscript where multiple pigments are present.
Methodology:
1. Sample Preparation by a Third Party: A technician or colleague (who will not be involved in the data analysis) prepares the samples for analysis (e.g., on microscope slides).
2. Anonymization: Each sample is assigned a random, non-descriptive code (e.g., SAM-001, SAM-002). The key linking the code to the sample's true identity (e.g., "Blue pigment from Folio 12v") is kept in a sealed document by the third party.
3. Data Acquisition: The primary researcher performs the analysis (e.g., Raman spectroscopy, SEM-EDS) on the anonymized samples.
4. Data Processing & Initial Interpretation: The researcher processes the raw data and provides a full interpretation for each coded sample (e.g., "SAM-001 contains lazurite and calcite") without knowing their origins.
5. Formal Review: The methods and interpretations are documented and formally reviewed. The researcher commits to this interpretation in writing.
6. Unblinding: Once the analysis and interpretation are finalized, the third party reveals the key. The researcher can then place their objective findings into the broader historical and material context. This process prevents the context from influencing the raw scientific interpretation.
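The anonymization step (performed by the third party, not the analyst) can be sketched as a shuffled code assignment with a separately held key. The sample descriptions and code format below are illustrative.

```python
import random

def anonymize(sample_descriptions, seed=None):
    """Assign shuffled, non-descriptive codes to samples. Returns the coded
    worklist for the analyst and the key, which the third party keeps sealed
    until the interpretation is committed in writing."""
    rng = random.Random(seed)
    codes = [f"SAM-{i:03d}" for i in range(1, len(sample_descriptions) + 1)]
    rng.shuffle(codes)                 # break any order-based inference
    key = dict(zip(codes, sample_descriptions))
    worklist = sorted(key)             # the analyst sees codes only
    return worklist, key

samples = ["Blue pigment, Folio 12v", "Red pigment, Folio 3r",
           "Green pigment, Folio 27v"]
worklist, key = anonymize(samples, seed=42)
```

Shuffling matters: if codes were assigned in collection order, the analyst could partially reconstruct sample identities from the sequence alone.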
Workflow for Blind Analysis
References
Technical Support Center: Enhancing Interdisciplinary Collaboration in Complex Research Projects
This technical support center provides troubleshooting guidance and frequently asked questions (FAQs) for researchers, scientists, and drug development professionals engaged in complex, interdisciplinary research projects. The goal is to mitigate common friction points and foster a more seamless and productive collaborative environment.
Communication Breakdowns & Terminology Mismatches
Effective communication is the cornerstone of any successful collaborative project.[1] In interdisciplinary science, where teams comprise specialists from diverse fields like molecular biology, bioinformatics, and pharmacology, the risk of miscommunication is high.[2][3]
FAQs & Troubleshooting
- Q: Our team members use the same terms (e.g., "significant," "model," "sample") but seem to mean different things. How can we resolve this?
  - A: This is a classic challenge of interdisciplinary work. The solution is to proactively create a "Project Lexicon" or glossary at the start of the project.
    - Action: Schedule a kickoff meeting dedicated to defining key terms. Have each discipline's lead explain what a term means in their context.
    - Example: A biologist's "sample" (a physical specimen) is different from a statistician's "sample" (a subset of a population). Document these distinctions.
    - Tool: Use a shared document (e.g., Google Doc, Confluence page) that is easily accessible and editable by all team members.
- Q: Bioinformaticians and wet-lab scientists are struggling to understand each other's work. How can we bridge this gap?
  - A: Implement "cross-disciplinary primers": short, regular presentations in which one team member explains a core concept, technique, or challenge from their field to the rest of the group.
    - Action: Dedicate the first 15 minutes of a weekly team meeting to these primers. Encourage questions and discussion. The goal is not to make everyone an expert but to foster mutual understanding and appreciation for the complexities of each other's work.[4]
- Q: We have a communication plan, but critical information is still falling through the cracks. What are we doing wrong?
  - A: Your communication plan may be too passive. It's not enough to just have channels; you need a structured workflow for communication, especially at critical project handoffs.
  - Troubleshooting:
    - Identify Critical Handoffs: Pinpoint stages where work moves from one discipline to another (e.g., from wet-lab data generation to computational analysis).
    - Establish Formal Check-ins: Institute mandatory, brief meetings at these handoff points to ensure the receiving team understands the data, any anomalies, and the specific questions to be addressed.
    - Use a "Push" System: Don't rely on team members to "pull" information from a shared server. The person completing a task should actively "push" the results and a summary to the next person in the workflow.
A structured communication workflow is essential for minimizing errors and ensuring alignment across disciplines.
Caption: A workflow diagram illustrating critical communication checkpoints.
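The "push" handoff idea can be made concrete as a small record type that refuses incomplete handoffs. This is an illustrative sketch only; the field names (`dataset_path`, `summary`, `anomalies`, `next_owner`) are invented for the example, not a prescribed schema.

```python
from dataclasses import dataclass

REQUIRED_FIELDS = ("dataset_path", "summary", "anomalies", "next_owner")

@dataclass
class Handoff:
    """A task handoff that must carry its own context to the receiving team."""
    dataset_path: str
    summary: str      # what was done, in the sender's words
    anomalies: str    # known issues, or "none observed"
    next_owner: str   # the person the work is being pushed to

    def validate(self):
        # Reject the handoff if any required field is blank.
        missing = [f for f in REQUIRED_FIELDS if not getattr(self, f).strip()]
        if missing:
            raise ValueError(f"handoff incomplete, missing: {missing}")
        return True

h = Handoff(
    dataset_path="/shared/run42/counts.tsv",
    summary="RNA-seq counts for 6 samples; see metadata sheet.",
    anomalies="Replicate T3 had RIN 7.8 (below the 8.0 target).",
    next_owner="lead bioinformatician",
)
ok = h.validate()
```

The point of the sketch is the invariant, not the class: a handoff that arrives without a summary or an anomaly note should fail loudly rather than silently land on a shared server.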
Data Sharing, Integration, and Reproducibility
Issues with data sharing and management are a primary source of inefficiency and conflict in collaborative research. Inconsistent formatting, inadequate documentation, and unclear data provenance can lead to significant delays and flawed analyses.
FAQs & Troubleshooting
- Q: Our bioinformatics team spends most of its time just cleaning up and reformatting data from the lab. How can we streamline this?
  - A: Establish a "Tidy Data" protocol and a "Data Dictionary" from the project's outset.
    - Action: Before any data is generated, both teams must agree on a standardized format for data tables (e.g., one variable per column, one observation per row), file naming conventions, and metadata requirements.
    - Data Dictionary: Create a "code book" or "README" template that must accompany every dataset. This file should explicitly define each variable (column header), the units of measurement, and any codes or symbols used in the dataset.
- Q: We have conflicting results between two labs working on the same project. How do we determine which result is correct?
  - A: This requires a systematic process of elimination, focusing on transparency and reproducibility.
  - Troubleshooting Steps:
    - Full Protocol Exchange: Both labs must share their complete, detailed experimental protocols, including reagent lot numbers, instrument settings, and any subtle deviations from the standard procedure.
    - Raw Data Review: The analysis team should re-analyze the raw data from both labs using the exact same pipeline. This separates variation arising from data generation from variation arising from data analysis.
    - Blinded Sample Swap: If the discrepancy persists, exchange blinded samples between the labs for re-processing to identify whether the issue is systematic to one lab's environment or workflow.
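A data dictionary is most useful when it is enforced mechanically. The sketch below checks a dataset against an agreed dictionary of columns; the dictionary contents and column names here are illustrative assumptions, not a standard.

```python
# Illustrative data dictionary, agreed by both teams before data generation.
DATA_DICTIONARY = {
    "Sample_ID": "unique identifier, string",
    "Group": "Treatment or Control",
    "Measurement_ng_per_uL": "RNA concentration, ng/uL",
}

def check_against_dictionary(rows):
    """Return a list of problems; an empty list means the dataset conforms."""
    problems = []
    for i, row in enumerate(rows):
        missing = set(DATA_DICTIONARY) - set(row)
        extra = set(row) - set(DATA_DICTIONARY)
        if missing:
            problems.append(f"row {i}: missing columns {sorted(missing)}")
        if extra:
            problems.append(f"row {i}: undocumented columns {sorted(extra)}")
    ids = [r.get("Sample_ID") for r in rows]
    if len(ids) != len(set(ids)):
        problems.append("Sample_ID values are not unique")
    return problems

good = [{"Sample_ID": "S1", "Group": "Treatment", "Measurement_ng_per_uL": 210.5}]
bad = [{"Sample_ID": "S1", "Group": "Treatment"}]  # missing a documented column
```

Running such a check at every handoff turns "most of our time is spent reformatting" into an automated gate that rejects non-conforming files before they reach the bioinformatics team.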
The following logic tree can guide the troubleshooting process for conflicting experimental results between two collaborating groups.
Caption: A decision tree for resolving conflicting experimental data.
Project Management & Credit Allocation
Clear project management, including defined roles, responsibilities, and a fair system for assigning credit, is critical to preventing disputes and keeping projects on track.[5]
FAQs & Troubleshooting
- Q: Junior team members feel their contributions are not being recognized, leading to low morale. How can we address this?
  - A: Implement a transparent contribution-tracking system from the start.
    - Action: Use a system like the CRediT (Contributor Roles Taxonomy) to document each individual's specific contributions (e.g., Conceptualization, Data Curation, Formal Analysis, Investigation, Writing).
    - Visibility: Review these roles periodically in team meetings. This not only ensures fairness but also helps project leaders identify and support team members' career development.
- Q: How should we decide authorship order on publications to avoid disputes later?
  - A: Discuss authorship openly and early, and revisit the discussion as the project evolves.
    - Action: In the project planning phase, establish clear criteria for authorship and the principles for determining author order (e.g., based on intellectual contribution, effort, or a combination).
    - Documentation: Create a written authorship agreement that outlines these principles. This document can be updated as roles and contributions change, providing a clear reference point to prevent future misunderstandings.
Data on the Impact of Collaboration and Communication
While precise data for ACHP projects is scarce, broader studies on business and project management highlight the significant costs of poor collaboration.
| Impact Area | Supporting Data / Finding | Source |
|---|---|---|
| Project Failure | 86% of employees and executives cite a lack of collaboration or ineffective communication as a cause of workplace failures. | Fierce, Inc. |
| Productivity Loss | Business leaders estimate teams lose the equivalent of 7.47 hours per week per employee due to poor communication. | Grammarly & The Harris Poll |
| Financial Cost | The estimated annual cost of poor communication to U.S. businesses is up to $1.2 trillion. | Grammarly & The Harris Poll |
| Alignment & Success | Over 97% of surveyed professionals believe that a lack of alignment within a team directly impacts the outcome of a project. | Fierce, Inc. |
| Drug Discovery | Collaboration between academia and industry is crucial for accelerating the drug discovery process and overcoming scientific limitations. | AZoLifeSciences |
Experimental Protocol: Interdisciplinary Workflow for Gene Expression Analysis
This protocol outlines a collaborative experiment between a Cell Biology team and a Bioinformatics team to analyze the effect of a novel compound on gene expression in a cancer cell line.
Objective: To identify differentially expressed genes and affected cellular pathways in HT-29 cancer cells after 24-hour treatment with Compound-X.
1. Cell Biology Experimental Methodology
- 1.1 Cell Culture:
  - HT-29 cells are cultured in McCoy's 5A Medium supplemented with 10% Fetal Bovine Serum and 1% Penicillin-Streptomycin.
  - Cells are maintained in a 37°C, 5% CO₂ incubator.
  - Cells are passaged upon reaching 80-90% confluency. All experiments are performed on cells between passages 5 and 10.
- 1.2 Experimental Treatment:
  - Cells are seeded into 6-well plates at a density of 5 x 10⁵ cells/well and allowed to adhere for 24 hours.
  - Three wells are treated with 10 µM Compound-X (dissolved in 0.1% DMSO). These are the "Treatment" replicates (n=3).
  - Three wells are treated with 0.1% DMSO vehicle control. These are the "Control" replicates (n=3).
  - Cells are incubated for 24 hours post-treatment.
- 1.3 RNA Extraction:
  - After 24 hours, media is aspirated, and cells are washed with PBS.
  - Total RNA is extracted from each well using the Qiagen RNeasy Mini Kit according to the manufacturer's protocol.
  - RNA quality and quantity are assessed using a NanoDrop spectrophotometer (target A260/280 ratio: ~2.0) and an Agilent Bioanalyzer (target RIN score: >8.0).
2. Interdisciplinary Checkpoint & Data Handoff
- 2.1 Data Package Preparation:
  - The Cell Biology team prepares a data package containing:
    - Raw Bioanalyzer and NanoDrop files.
    - A completed "Metadata Sheet" (Excel file) with the following columns: Sample_ID, Group (Treatment/Control), Replicate_Number, RIN_Score, RNA_Concentration_ng/uL. The Sample_ID must be a unique identifier used consistently.
- 2.2 Handoff Meeting:
  - A mandatory meeting is held between the lead biologist and the lead bioinformatician.
  - Agenda: Review the metadata sheet for completeness and clarity. Discuss any anomalies (e.g., one replicate with a slightly lower RIN score). Confirm the primary analytical goal (identifying differentially expressed genes).
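Parts of the handoff review can be automated before the meeting. The sketch below flags metadata anomalies in plain Python; the sample rows and RIN values are hypothetical, and the 8.0 threshold follows the protocol's stated RIN target.

```python
def review_metadata(rows, rin_target=8.0, expected_n=3):
    """Flag anomalies to discuss at the handoff meeting.

    `rows` are dicts with (a subset of) the metadata-sheet columns:
    Sample_ID, Group, Replicate_Number, RIN_Score.
    """
    flags = []
    for row in rows:
        if row["RIN_Score"] < rin_target:
            flags.append(
                f"{row['Sample_ID']}: RIN {row['RIN_Score']} below target {rin_target}"
            )
    for group in ("Treatment", "Control"):
        n = sum(1 for r in rows if r["Group"] == group)
        if n != expected_n:
            flags.append(f"{group}: {n} replicates, expected {expected_n}")
    return flags

# Hypothetical metadata: one treatment replicate just misses the RIN target.
rows = [
    {"Sample_ID": f"T{i}", "Group": "Treatment", "Replicate_Number": i, "RIN_Score": rin}
    for i, rin in [(1, 9.1), (2, 8.6), (3, 7.8)]
] + [
    {"Sample_ID": f"C{i}", "Group": "Control", "Replicate_Number": i, "RIN_Score": 8.9}
    for i in (1, 2, 3)
]
flags = review_metadata(rows)
```

The output (here, a single flag for replicate T3) becomes the agenda item for the handoff discussion rather than a surprise discovered mid-analysis.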
3. Bioinformatics Experimental Methodology
- 3.1 Data Quality Control (Post-Sequencing):
  - Raw sequencing data (FASTQ files) are received from the sequencing core.
  - Quality is assessed using FastQC to check per-base quality scores, adapter content, and other metrics.
  - Trimmomatic is used to remove adapter sequences and low-quality reads.
- 3.2 Alignment and Quantification:
  - Cleaned reads are aligned to the human reference genome (GRCh38) using the STAR aligner.
  - Gene-level read counts are generated using featureCounts.
- 3.3 Differential Expression Analysis:
  - The count matrix is imported into R.
  - Differential gene expression analysis is performed using the DESeq2 package, comparing the "Treatment" group to the "Control" group.
  - Genes with an adjusted p-value < 0.05 and an absolute log₂ fold change > 1 are considered significantly differentially expressed.
- 3.4 Pathway Analysis:
  - The list of significant genes is used as input for Gene Set Enrichment Analysis (GSEA) to identify statistically significant, concordantly regulated biological pathways.
  - Results are visualized using volcano plots (for differential expression) and enrichment plots (for GSEA).
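In this protocol the significance filter of step 3.3 is applied in R via DESeq2; purely as a language-agnostic illustration of the same cutoff logic (adjusted p < 0.05 and |log₂ fold change| > 1), here is a minimal Python sketch with hypothetical genes. It does not reproduce the DESeq2 API or statistics.

```python
def is_significant(padj, log2_fc, alpha=0.05, lfc_cutoff=1.0):
    """DE call: adjusted p-value below alpha AND |log2 fold change| above cutoff."""
    return padj < alpha and abs(log2_fc) > lfc_cutoff

# Hypothetical results table: gene -> (adjusted p-value, log2 fold change).
results = {
    "GENE_A": (0.001, 2.3),   # significant, up-regulated
    "GENE_B": (0.001, -1.8),  # significant, down-regulated
    "GENE_C": (0.001, 0.4),   # p-value passes, but effect size is too small
    "GENE_D": (0.200, 3.1),   # large effect, but not statistically reliable
}
significant = sorted(g for g, (p, fc) in results.items() if is_significant(p, fc))
```

Note that both conditions must hold: GENE_C fails on effect size and GENE_D on the adjusted p-value, even though each passes one cutoff.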
4. Final Joint Review
- A final meeting with the entire project team is held to review the bioinformatics report. The bioinformatician presents the key findings, and the cell biologists provide biological context and interpretation to collaboratively decide on the next steps for the project.
References
- 1. 9 Reasons Why Collaborative Projects Fail - SU Social [susocial.com]
- 2. azolifesciences.com [azolifesciences.com]
- 3. Collaborative research in modern era: Need and challenges - PMC [pmc.ncbi.nlm.nih.gov]
- 4. The Role of Cross-disciplinary Collaboration in Drug Discovery [parabolicdrugs.com]
- 5. medium.com [medium.com]
Validation & Comparative
Validating Archaeological Findings: A Comparative Guide to ACHP's Integrity Criteria
For Researchers, Scientists, and Drug Development Professionals in Archaeology
This guide provides a comprehensive comparison of methodologies used to validate archaeological findings against the integrity criteria established by the Advisory Council on Historic Preservation (ACHP). The validation of an archaeological site's integrity is a critical step in determining its historical significance and eligibility for inclusion in the National Register of Historic Places.[1][2][3][4] This process relies on a set of seven aspects of integrity: location, design, setting, materials, workmanship, feeling, and association.
This guide outlines the primary investigative techniques employed by archaeologists to assess these criteria, presenting the methodologies, the nature of the data generated, and illustrative examples.
Comparison of Archaeological Validation Methods
The following table summarizes the key archaeological methods used to assess the integrity of a site, the type of data they generate, and their relevance to the ACHP's seven aspects of integrity.
| Method | Purpose | Data Generated (Quantitative & Qualitative) | Relevant Aspects of Integrity |
|---|---|---|---|
| Geophysical Survey | To non-invasively map subsurface archaeological features. | - 2D and 3D maps of subsurface anomalies- Gradiometer data (nT)- Resistivity data (ohms)- GPR time-slice maps (nanoseconds) | Location, Design, Setting, Association |
| Stratigraphic Analysis | To understand the chronological sequence of a site's formation and occupation. | - Stratigraphic profile drawings- Harris Matrix diagrams- Soil horizon depth and composition data- Chronological sequencing of contexts | Location, Design, Materials, Association |
| Artifact Distribution Analysis | To identify patterns of human activity across a site. | - Artifact density maps (artifacts/unit area)- Spatial clustering statistics (e.g., Nearest Neighbor Analysis)- Distribution plots of different artifact types | Design, Setting, Association, Feeling |
| Pedestrian Survey | To systematically identify and record surface artifacts and features. | - Artifact counts and densities per survey unit- GPS coordinates of significant finds- Maps of surface feature locations | Location, Design, Setting |
| Test Excavation | To ground-truth remote sensing data and retrieve a sample of subsurface artifacts and features. | - Artifact and ecofact counts and weights- Feature dimensions and descriptions- Soil Munsell color and texture data | Location, Design, Materials, Workmanship, Association |
Experimental Protocols
Detailed methodologies for the key experiments cited are provided below.
Geophysical Survey
Objective: To detect and map buried archaeological features without excavation.
Methodology:
- Site Grid Establishment: A precise grid system is established over the survey area using a total station or high-precision GPS.
- Instrument Selection: The choice of instrument (e.g., magnetometer, ground-penetrating radar, electrical resistivity meter) depends on the site's geology and the anticipated types of archaeological features.
- Data Collection: The instrument is systematically moved across the grid, taking readings at regular intervals. For example, a magnetometer survey might collect data every 0.5 meters along transects spaced 1 meter apart.
- Data Processing: The raw data is downloaded and processed using specialized software to filter out noise and enhance the visibility of archaeological anomalies.
- Data Visualization: The processed data is rendered as a plan-view map, with different colors or grayscale values representing variations in the measured physical property.[5]
Example Data: A gradiometer survey might reveal linear anomalies indicative of buried walls, or clusters of high magnetic readings suggesting hearths or pits. The output is a digital map where these features are clearly visible.
Stratigraphic Analysis
Objective: To determine the chronological sequence of deposits and features at a site.
Methodology:
- Excavation: A test unit or trench is carefully excavated, revealing a vertical profile of the soil layers (strata).
- Profile Documentation: The exposed profile is cleaned, and a detailed, scaled drawing is made. Photographs are also taken.
- Context Identification: Each distinct layer (context) is identified based on its color, texture, and composition.
- Harris Matrix: The relationships between the different contexts are recorded using a Harris Matrix, a diagram that illustrates which layers are above, below, or cut through others.
- Interpretation: The sequence of contexts is interpreted to reconstruct the history of the site's formation, from the earliest deposits to the most recent.
Example Data: A stratigraphic profile drawing will show the different layers of soil and their relationship to features like pits or postholes. The Harris Matrix provides a logical, flowchart-like representation of the site's chronological development.
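A Harris Matrix is, in effect, a directed acyclic graph of "lies stratigraphically above" relations, so a chronological ordering of contexts can be recovered by topological sort. The sketch below uses Python's standard-library `graphlib`; the context labels are hypothetical.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each key maps to the contexts directly *below* it (i.e., deposited earlier).
above = {
    "topsoil": ["pit fill"],
    "pit fill": ["pit cut"],
    "pit cut": ["occupation layer"],
    "occupation layer": ["natural subsoil"],
    "natural subsoil": [],
}

# TopologicalSorter emits predecessors first, so this ordering runs from the
# earliest deposit to the most recent -- the site's formation sequence.
order = list(TopologicalSorter(above).static_order())
```

For a real matrix with branching relations (several fills cut by one feature), `static_order()` still yields a valid chronological sequence, and a cycle (a recording error, since a layer cannot be both above and below another) raises `graphlib.CycleError`.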
Artifact Distribution Analysis
Objective: To understand the spatial patterning of human activities on a site.
Methodology:
- Systematic Collection: Artifacts are collected from the surface or during excavation from precisely defined horizontal locations (e.g., 1x1 meter grid squares).
- Artifact Identification and Quantification: The collected artifacts are cleaned, identified, and counted.
- Data Plotting: The location and quantity of different artifact types are plotted on a site map.
- Density Mapping: A Geographic Information System (GIS) is often used to create density maps that visualize areas of high and low artifact concentration.
- Spatial Statistical Analysis: Statistical tests can be applied to determine if the observed patterns are random or represent meaningful clusters of activity.
Example Data: An artifact distribution map might show a high concentration of ceramic sherds in one area, suggesting a domestic space, and a cluster of lithic debitage in another, indicating a tool manufacturing area.
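The Nearest Neighbor Analysis mentioned above is often computed as the Clark-Evans ratio: the observed mean nearest-neighbour distance divided by the value expected for a random scatter of the same density (0.5/√density). A ratio well below 1 suggests clustering; well above 1, dispersion. A minimal sketch with hypothetical find-spot coordinates, ignoring edge corrections:

```python
import math

def clark_evans_ratio(points, area):
    """Clark-Evans R = observed mean NN distance / expected under randomness."""
    n = len(points)
    nearest = []
    for i, (x1, y1) in enumerate(points):
        d = min(
            math.hypot(x1 - x2, y1 - y2)
            for j, (x2, y2) in enumerate(points)
            if j != i
        )
        nearest.append(d)
    observed = sum(nearest) / n
    expected = 0.5 / math.sqrt(n / area)  # mean NN distance for a random pattern
    return observed / expected

# Hypothetical sherd find-spots (metres) within a 10 m x 10 m collection area.
clustered = [(1.0, 1.0), (1.2, 1.1), (0.9, 1.3), (1.1, 0.8)]
r = clark_evans_ratio(clustered, area=100.0)
```

Here the four finds sit within roughly half a metre of one another inside a 100 m² area, so the ratio comes out far below 1, consistent with a discrete activity area rather than background scatter.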
Visualization of the Validation Workflow
The following diagram illustrates the logical workflow for validating archaeological findings against the ACHP's integrity criteria.
Caption: Workflow for validating archaeological findings against ACHP integrity criteria.
References
- 1. Determining which archaeological sites are significant: Evaluation | Advisory Council on Historic Preservation [achp.gov]
- 2. achp.gov [achp.gov]
- 3. Section 106 Archaeology Guidance - Terms Defined | Advisory Council on Historic Preservation [achp.gov]
- 4. Section 106 Archaeology Guidance | Advisory Council on Historic Preservation [achp.gov]
- 5. researchgate.net [researchgate.net]
Unearthing the Past: A Comparative Guide to Geophysical Survey Methods for Archaeological and Cultural Heritage Projects
For researchers, scientists, and drug development professionals venturing into archaeological and cultural heritage projects (ACHP), a range of non-invasive geophysical survey methods stands ready to reveal what lies hidden beneath the soil. This guide provides a comprehensive comparison of the most prevalent techniques, supported by experimental data and detailed protocols, to aid in selecting the most appropriate methods for your research objectives.
The primary goal of geophysical surveys in archaeology is to map and characterize subsurface archaeological features without the need for excavation.[1] These methods rely on detecting contrasts in the physical properties of buried materials relative to the surrounding soil.[2] The most commonly employed techniques include Magnetometry, Ground-Penetrating Radar (GPR), and Electrical Resistivity.[3] Emerging and complementary methods such as Electromagnetic (EM) Conductivity and Seismic Surveys also play a crucial role in specific contexts. The success of any archaeological geophysical investigation hinges on a well-designed survey strategy, appropriate instrumentation, and a thorough understanding of the site's geology and archaeological record.[3] An integrated approach, combining multiple geophysical methods, is often recommended to enhance the reliability of the interpretation.[4]
Comparative Analysis of Key Geophysical Survey Methods
The selection of a geophysical survey method is a critical decision that depends on the specific research questions, the nature of the anticipated archaeological features, and the environmental conditions of the site. The following table summarizes the key performance indicators for the most common methods used in ACHP work.
| Method | Principle | Typical Resolution | Typical Depth of Investigation | Survey Speed | Common Applications in ACHP Work |
|---|---|---|---|---|---|
| Magnetometry (Gradiometry) | Measures localized variations in the Earth's magnetic field caused by magnetic minerals in the soil or by buried features. | 0.0625 m² | 1-2 meters | High (several hectares per day) | Detection of hearths, kilns, ditches, pits, and ferrous objects. Large-scale site mapping. |
| Ground-Penetrating Radar (GPR) | Transmits high-frequency radio waves into the ground and records the reflected signals from subsurface interfaces. | 0.20-0.66 m depth resolution with a 500 MHz antenna | Up to 10 meters (frequency dependent) | Medium to High | High-resolution mapping of buried structures, walls, floors, voids, and stratigraphy. |
| Electrical Resistivity | Injects an electrical current into the ground and measures the resistance to its flow, which is influenced by soil moisture and composition. | 1 m² | Up to 30 meters | Low to Medium | Detection of buried walls, foundations, ditches, and mapping of soil stratigraphy. |
| Electromagnetic (EM) Conductivity | Induces an electromagnetic field in the ground and measures the secondary field generated by conductive materials. | Variable | Up to 6 meters | High | Rapid mapping of soil variations, detection of metallic objects, and identification of areas with contrasting soil conductivity. |
| Seismic Survey | Generates seismic waves and records the time it takes for them to travel through the subsurface, revealing variations in material density and elasticity. | Lower than GPR | Can exceed 50 meters | Low | Deep subsurface investigations, bedrock mapping, and identification of large-scale features like buried channels. |
Experimental Protocols
The successful application of any geophysical method requires strict adherence to established experimental protocols. The following provides a general overview of the methodologies for the key techniques.
Magnetometer Survey
A fluxgate gradiometer is the most commonly used instrument for archaeological surveys. The survey is typically conducted by walking in a systematic grid pattern across the site, with the instrument recording measurements at regular intervals.
- Data Acquisition: Data is collected along parallel transects, with a typical spacing of 0.5 meters. Readings are often taken at intervals of 0.125 meters or less along each transect.
- Instrumentation: A fluxgate gradiometer, which measures the vertical gradient of the magnetic field, is preferred for its ability to resolve small, near-surface features.
- Data Processing: The collected data is processed to remove noise and enhance anomalies. This may include steps such as zero-mean traversing, clipping, and interpolation to create a clear image of the subsurface magnetic variations.
Ground-Penetrating Radar (GPR) Survey
GPR surveys involve pulling an antenna across the ground surface. The antenna transmits and receives electromagnetic pulses, and the data is recorded on a control unit.
- Data Acquisition: GPR data is collected along closely spaced transects, often 0.5 meters apart or less, to generate a high-resolution 3D picture of the subsurface.
- Antenna Frequency: The choice of antenna frequency is a critical parameter. Higher frequencies (e.g., 500 MHz) provide higher resolution but shallower penetration, while lower frequencies (e.g., 250 MHz) penetrate deeper at lower resolution.
- Data Processing: GPR data processing includes steps such as time-zero correction, background removal, and migration to accurately position reflectors and create time-slice maps that show features at different depths.
Electrical Resistivity Survey
Electrical resistivity surveys are conducted by inserting a series of electrodes into the ground. A current is passed between two electrodes, and the potential difference is measured between two other electrodes.
- Electrode Configuration: Various electrode arrays can be used, with the Wenner and twin-probe arrays being common in archaeology. The spacing of the electrodes determines the depth of investigation.
- Data Acquisition: Measurements are typically taken at regular intervals along a grid. For a 2D resistivity imaging survey, a line of multiple electrodes is used to create a vertical profile of the subsurface resistivity.
- Data Processing: The raw resistance measurements are converted into apparent resistivity values and then inverted using specialized software to create a 2D or 3D model of the subsurface resistivity distribution.
Visualizing the Workflow and Method Selection
To effectively plan and execute a geophysical survey for an ACHP project, a clear understanding of the overall workflow and the logic behind method selection is crucial. The following diagrams, generated using Graphviz, illustrate these processes.
References
A Researcher's Guide to Independently Verifying Historic Property Significance for the ACHP
For researchers, scientists, and drug development professionals accustomed to rigorous, data-driven analysis, the process of determining the "significance" of a historic property for the Advisory Council on Historic Preservation (ACHP) can appear subjective. However, the methodology is rooted in a structured framework comparable to established research protocols. This guide provides a systematic approach to independently verifying the significance of a historic property, drawing parallels to scientific evaluation and presenting the criteria in a format amenable to quantitative assessment.
The primary benchmark for determining significance is the National Register of Historic Places, which is used by the ACHP in its Section 106 review process.[1][2][3][4] A property must meet at least one of four main criteria and possess historic integrity.[1]
Comparative Analysis of Significance Criteria
The significance of a historic property is evaluated against the four National Register Criteria for Evaluation. These criteria can be analogized to distinct categories of impact or "endpoints" in a scientific study.
| Criterion | Description | Analogous Scientific Concept | Key Data Points for Verification |
|---|---|---|---|
| Criterion A: Event | Associated with events that have made a significant contribution to the broad patterns of our history. | A key event in a signaling pathway or disease progression. | - Historical records linking the property to the event.- Documentation of the event's importance in local, state, or national history.- Thematic studies or historical contexts that establish the event's significance. |
| Criterion B: Person | Associated with the lives of persons significant in our past. | The role of a specific protein or gene in a biological process. | - Biographical information of the significant individual.- Evidence of the person's important contributions.- Direct association of the property with the productive period of the person's life. |
| Criterion C: Design/Construction | Embodies the distinctive characteristics of a type, period, or method of construction, represents the work of a master, possesses high artistic values, or represents a significant and distinguishable entity whose components may lack individual distinction. | A novel molecular structure or a well-defined and reproducible experimental model. | - Architectural plans and specifications.- Comparison to other examples of the same style or period.- Information about the architect or builder.- Physical evidence of distinctive construction techniques or materials. |
| Criterion D: Information Potential | Has yielded, or may be likely to yield, information important in prehistory or history. | A sample or dataset with the potential for novel discovery through further analysis. | - Archaeological surveys and reports.- Evidence of intact archaeological deposits.- Potential to answer specific research questions. |
Experimental Protocol: The Section 106 Review Process
The Section 106 review process is the "experimental protocol" through which the significance of a historic property is formally assessed, particularly when a federal action may affect it. This process ensures that federal agencies consider the effects of their actions on historic properties.
Key Steps in the Section 106 Protocol:
- Initiate the Process: The federal agency determines whether its proposed undertaking could affect historic properties.
- Identify Historic Properties: The agency identifies properties in the project's area of potential effects and determines whether they are listed or eligible for listing in the National Register.
- Assess Adverse Effects: The agency, in consultation with the State Historic Preservation Officer (SHPO) and/or Tribal Historic Preservation Officer (THPO), determines whether the undertaking will have an adverse effect on the historic property.
- Resolve Adverse Effects: If an adverse effect is found, the agency consults with the SHPO/THPO and other parties to find ways to avoid, minimize, or mitigate the harm. This often results in a Memorandum of Agreement (MOA).
Quantitative Assessment of Historic Integrity
A property must not only meet one of the four criteria of significance but also possess "integrity." Integrity is the ability of a property to convey its historical significance. The seven aspects of integrity can be systematically evaluated, similar to a multi-parameter assay.
| Aspect of Integrity | Description | Method of Verification |
|---|---|---|
| Location | The place where the historic property was constructed or the place where the historic event occurred. | - Historic maps and deeds.- Physical evidence of original location. |
| Design | The combination of elements that create the form, plan, space, structure, and style of a property. | - Architectural drawings.- Physical examination of the property.- Comparison with similar properties. |
| Setting | The physical environment of a historic property. | - Historic photographs and descriptions.- Analysis of the surrounding landscape and land use. |
| Materials | The physical elements that were combined or deposited during a particular period of time and in a particular pattern or configuration to form a historic property. | - Physical inspection and materials analysis.- Comparison with original specifications. |
| Workmanship | The physical evidence of the crafts of a particular culture or people during any given period in history or prehistory. | - Examination of construction details and finishes.- Identification of tool marks and construction techniques. |
| Feeling | A property's expression of the aesthetic or historic sense of a particular period of time. | - Qualitative assessment based on the survival of other aspects of integrity. |
| Association | The direct link between an important historic event or person and a historic property. | - Historical documents and records.- Physical evidence connecting the property to the event or person. |
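The "multi-parameter assay" analogy can be made concrete as a simple checklist across the seven aspects. This scoring sketch is illustrative only: the National Register evaluation is qualitative, and neither the Register nor the ACHP prescribes numeric scores or weights.

```python
ASPECTS = ("location", "design", "setting", "materials",
           "workmanship", "feeling", "association")

def integrity_summary(assessment):
    """Summarize a seven-aspect integrity assessment.

    `assessment` maps each aspect to True (retained) or False (lost).
    Returns (number of aspects retained, sorted list of aspects lost).
    """
    unassessed = set(ASPECTS) - set(assessment)
    if unassessed:
        # All seven aspects must be considered, as in a multi-parameter assay.
        raise ValueError(f"unassessed aspects: {sorted(unassessed)}")
    retained = sum(1 for a in ASPECTS if assessment[a])
    lost = sorted(a for a in ASPECTS if not assessment[a])
    return retained, lost

# Hypothetical property: intact except that modern construction now
# surrounds it, compromising its setting.
example = {a: True for a in ASPECTS}
example["setting"] = False
retained, lost = integrity_summary(example)
```

Such a checklist is useful for documenting which aspects drove the judgment; the final eligibility call still rests on whether the surviving aspects allow the property to convey its significance.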
Visualizing the Verification Process
The following diagrams illustrate the workflow for independently verifying the significance of a historic property.
Caption: Workflow for assessing historic property significance.
References
- 1. National Register of Historic Places - Wikipedia [en.wikipedia.org]
- 2. An Introduction to Section 106 | Advisory Council on Historic Preservation [achp.gov]
- 3. Section 106: National Historic Preservation Act of 1966 | GSA [gsa.gov]
- 4. Historic Preservation - Environmental Review - HUD Exchange [hudexchange.info]
Comparing the effectiveness of different conservation treatments under ACHP.
An essential strategy in modern cancer therapy involves targeting the DNA damage response (DDR) pathways, which are critical for cancer cell survival. Poly (ADP-ribose) polymerase (PARP) inhibitors are a class of targeted therapies that have shown significant efficacy in cancers with deficiencies in homologous recombination repair (HRR), such as those with BRCA1/2 mutations.[1] This guide provides a comparative analysis of the effectiveness of different PARP inhibitors, focusing on their mechanisms of action, clinical efficacy, and the experimental protocols used for their evaluation.
Mechanism of Action: PARP Inhibition and Synthetic Lethality
PARP enzymes, particularly PARP1 and PARP2, are crucial for the repair of DNA single-strand breaks (SSBs).[2][3] When SSBs occur, PARP binds to the damaged DNA and synthesizes poly (ADP-ribose) (PAR) chains, which recruit other DNA repair proteins.[2][4] PARP inhibitors work through two primary mechanisms:
- Catalytic Inhibition: PARP inhibitors compete with the natural substrate NAD+, blocking the synthesis of PAR chains and preventing the recruitment of repair proteins. This leads to the accumulation of unrepaired SSBs.
- PARP Trapping: Some PARP inhibitors not only block catalytic activity but also "trap" the PARP enzyme on the DNA at the site of the break. This trapped PARP-DNA complex is highly cytotoxic, as it can stall and collapse replication forks, leading to the formation of double-strand breaks (DSBs).
In healthy cells, DSBs are efficiently repaired by the homologous recombination repair (HRR) pathway, which relies on functional BRCA1 and BRCA2 proteins. However, in cancer cells with mutations in BRCA1/2 or other HRR genes (a state known as homologous recombination deficiency or HRD), these DSBs cannot be accurately repaired. The accumulation of DSBs triggers genomic instability and ultimately leads to cell death. This concept, where the combination of two non-lethal defects (in this case, HRR deficiency and PARP inhibition) results in cell death, is known as synthetic lethality .
Comparative Efficacy of Approved PARP Inhibitors
Several PARP inhibitors have been approved for clinical use, primarily in ovarian, breast, prostate, and pancreatic cancers. While direct head-to-head trials are limited, network meta-analyses and individual clinical trial data provide a basis for comparison. The main approved inhibitors are Olaparib, Rucaparib, Niraparib, and Talazoparib.
Clinical Efficacy in Ovarian Cancer (Maintenance Therapy)
The following tables summarize Progression-Free Survival (PFS) data from key clinical trials for different PARP inhibitors used as maintenance therapy in platinum-sensitive recurrent ovarian cancer.
Table 1: Progression-Free Survival (PFS) in BRCA-mutated (BRCAm) Ovarian Cancer
| PARP Inhibitor | Trial | Patient Population | Median PFS (Inhibitor vs. Placebo) | Hazard Ratio (HR) (95% CI) | Reference |
|---|---|---|---|---|---|
| Olaparib | SOLO2 | gBRCAm Recurrent | 19.1 vs. 5.5 months | 0.30 (0.22–0.41) | |
| Niraparib | NOVA | gBRCAm Recurrent | 21.0 vs. 5.5 months | 0.27 (0.17–0.41) | |
| Rucaparib | ARIEL3 | gBRCAm Recurrent | 16.6 vs. 5.4 months | 0.23 (0.16–0.34) | |
Table 2: Progression-Free Survival (PFS) in the Overall Population (Recurrent Ovarian Cancer)
| PARP Inhibitor | Trial | Median PFS (Inhibitor vs. Placebo) | Hazard Ratio (HR) (95% CI) | Reference |
|---|---|---|---|---|
| Olaparib | Study 19 | 8.4 vs. 4.8 months | 0.35 (0.25–0.49) | |
| Niraparib | NOVA | 8.8 vs. 3.8 months | 0.45 (0.34–0.61) | |
| Rucaparib | ARIEL3 | 10.8 vs. 5.4 months | 0.36 (0.30–0.45) | |
A network meta-analysis of several trials concluded that there was no statistically significant difference in efficacy (PFS) among olaparib, niraparib, and rucaparib for the treatment of ovarian cancer.
Comparative Safety Profile
While efficacy appears similar, the toxicity profiles of PARP inhibitors can differ.
Table 3: Common Grade ≥3 Adverse Events (AEs) in Ovarian Cancer Trials
| Adverse Event | Olaparib | Niraparib | Rucaparib | Reference |
|---|---|---|---|---|
| Anemia | 20% | 25% | 25% | |
| Thrombocytopenia | 1% | 34% | 5% | |
| Neutropenia | 5% | 20% | 7% | |
| Fatigue | 4% | 8% | 7% | |
Olaparib generally appears to have a more favorable hematological safety profile, with lower rates of severe thrombocytopenia and neutropenia compared to niraparib. Niraparib is associated with the highest rates of severe anemia, thrombocytopenia, and neutropenia.
Experimental Protocols for Evaluating PARP Inhibitors
The evaluation of PARP inhibitors involves a range of in vitro and in vivo experiments to determine their potency, mechanism of action, and efficacy.
PARP Activity Assay
This assay measures the ability of a compound to inhibit the enzymatic activity of PARP. A common method is a colorimetric or chemiluminescent ELISA-based assay.
Principle: The assay, run in a 96-well plate format, measures the incorporation of biotinylated NAD+ onto histone proteins, a reaction catalyzed by the PARP enzyme. The amount of incorporated biotin is then detected using streptavidin-HRP and a substrate that produces a colorimetric or light signal. The signal intensity is proportional to PARP activity.
Brief Protocol:
1. Coating: Coat a 96-well plate with histone proteins.
2. Reaction Setup: Add the PARP enzyme, activated DNA (to stimulate PARP activity), the test inhibitor (e.g., Olaparib) at various concentrations, and biotinylated NAD+.
3. Incubation: Incubate the plate to allow the PARP-catalyzed reaction to occur.
4. Detection: Add streptavidin-HRP, which binds to the biotinylated histones.
5. Signal Generation: Add a suitable HRP substrate (e.g., TMB for colorimetric detection) and measure the absorbance or luminescence.
6. Data Analysis: Calculate the percentage of inhibition at each drug concentration and determine the IC50 value (the concentration of inhibitor required to reduce PARP activity by 50%).
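The data-analysis step can be sketched in code. The snippet below is a minimal illustration, not part of any published protocol: the absorbance readings and control values are invented, and the IC50 is estimated by simple log-linear interpolation rather than the four-parameter logistic fit typically used in practice.

```python
import math

def percent_inhibition(signal, positive_ctrl, background):
    """Convert a raw assay signal into percent inhibition of PARP activity."""
    window = positive_ctrl - background
    return 100.0 * (1.0 - (signal - background) / window)

def interpolate_ic50(concs, inhibitions):
    """Estimate the IC50 by linear interpolation on a log10 concentration scale.

    Assumes `concs` is sorted ascending and the inhibition values pass
    through 50% somewhere inside the tested range.
    """
    pairs = list(zip(concs, inhibitions))
    for (c_lo, i_lo), (c_hi, i_hi) in zip(pairs, pairs[1:]):
        if i_lo <= 50.0 <= i_hi:
            frac = (50.0 - i_lo) / (i_hi - i_lo)
            return 10 ** (math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo)))
    raise ValueError("50% inhibition not bracketed by the tested concentrations")

# Illustrative dose-response data (concentrations in nM; all values invented).
concentrations = [1, 10, 100, 1000]
signals = [0.95, 0.70, 0.30, 0.10]   # raw absorbance at each concentration
pos_ctrl, bkg = 1.00, 0.05           # uninhibited and no-enzyme control wells

inhibition = [percent_inhibition(s, pos_ctrl, bkg) for s in signals]
ic50_nM = interpolate_ic50(concentrations, inhibition)
```

In practice the interpolation would be replaced by a nonlinear regression of the full dose-response curve, but the normalization logic is the same.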
Cell Viability Assay
Cell viability assays are fundamental for determining the cytotoxic or cytostatic effects of a drug on cancer cells. The MTT or resazurin assays are commonly used colorimetric methods.
Principle: These assays rely on the ability of metabolically active (i.e., viable) cells to reduce a substrate into a colored product. For the MTT assay, the tetrazolium salt MTT is reduced by mitochondrial dehydrogenases to a purple formazan product. The amount of formazan produced is proportional to the number of viable cells.
Brief Protocol:
1. Cell Plating: Seed cancer cells (e.g., a BRCA-mutated cell line) into a 96-well plate and allow them to adhere overnight.
2. Drug Treatment: Treat the cells with a serial dilution of the PARP inhibitor for a specified period (e.g., 72 hours).
3. Reagent Addition: Add the MTT solution to each well and incubate for 1-4 hours to allow formazan crystal formation.
4. Solubilization: Add a solubilization solution (e.g., DMSO or a detergent-based solution) to dissolve the formazan crystals.
5. Measurement: Measure the absorbance at approximately 570 nm using a microplate reader.
6. Data Analysis: Plot the percentage of cell viability against the drug concentration to determine the GI50 or IC50 value.
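The normalization behind the final step is simple: each treated well is scaled between the cell-free blank and the vehicle-treated control. The Python sketch below uses invented 570 nm readings purely for illustration.

```python
def percent_viability(a_treated, a_vehicle, a_blank):
    """Normalize an MTT absorbance reading to percent viability.

    a_vehicle: mean absorbance of vehicle-treated control wells (100% viable).
    a_blank:   mean absorbance of cell-free blank wells (0% signal).
    """
    return 100.0 * (a_treated - a_blank) / (a_vehicle - a_blank)

# Illustrative readings across a serial dilution (all values invented).
blank, vehicle = 0.08, 1.28
doses_nM = [10, 100, 1000, 10000]
readings = [1.22, 0.98, 0.62, 0.23]

viability = [round(percent_viability(a, vehicle, blank), 1) for a in readings]
# Plotting `viability` against log10(dose) gives the curve from which the
# GI50 or IC50 is read off.
```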
Conclusion
PARP inhibitors represent a significant advancement in the treatment of cancers with HRD, particularly those with BRCA mutations. While the approved inhibitors Olaparib, Rucaparib, and Niraparib demonstrate comparable efficacy in terms of progression-free survival in ovarian cancer, their safety profiles, particularly concerning hematologic toxicities, show notable differences. Talazoparib is recognized as a potent PARP trapper and has shown strong clinical activity in breast cancer. The selection of a specific PARP inhibitor may therefore depend on the tumor type, biomarker status, and the individual patient's tolerance for specific adverse events. The continued development and evaluation of these agents, guided by robust preclinical and clinical experimental protocols, are crucial for optimizing their use in cancer therapy.
References
Navigating the Future of the Past: A Comparative Guide to Validating Climate Change Impact Assessments on Historic Properties
For researchers, scientists, and cultural heritage managers, the escalating threat of climate change to historic properties necessitates robust and reliable impact assessments. This guide provides a comparative analysis of leading methodologies for validating these assessments, enabling informed decisions for the preservation of our shared cultural heritage.
The validation of climate change impact assessments is crucial for developing effective adaptation and mitigation strategies for historic properties. This guide compares four prominent methodologies: Risk Assessment Frameworks, the Climate Vulnerability Index (CVI), Hygrothermal Simulation, and Long-Term Monitoring. Each approach offers unique advantages and is suited to different contexts, resources, and objectives.
Comparative Analysis of Validation Methodologies
The following table summarizes the key quantitative parameters and characteristics of the four primary methodologies for validating climate change impact assessments on historic properties. This allows for a direct comparison of their data requirements, outputs, and typical applications.
| Feature | Risk Assessment Frameworks | Climate Vulnerability Index (CVI) | Hygrothermal Simulation | Long-Term Monitoring |
|---|---|---|---|---|
| Primary Output | Risk maps, indices, and prioritized lists of vulnerable properties. | Vulnerability scores (e.g., low, moderate, high) for Outstanding Universal Value (OUV) and community. | Predictions of future indoor temperature and relative humidity; moisture content in building materials. | Time-series data on environmental parameters and material decay. |
| Core Principle | Risk is a function of Hazard, Exposure, and Vulnerability[1][2]. | Rapid, systematic assessment of climate vulnerability through expert and stakeholder workshops[3][4]. | Dynamic modeling of heat and moisture transfer between the building envelope and the environment[5]. | Direct observation and measurement of climate-induced changes over extended periods. |
| Data Inputs | Climate projections (e.g., temperature, precipitation), property characteristics, material properties, socio-economic data. | Statement of Outstanding Universal Value (SOUV), climate stressor data, expert and stakeholder knowledge. | Building geometry and material properties, local climate data (historical and future scenarios). | On-site measurements of climate variables (e.g., temperature, RH, wind) and material condition. |
| Typical Climate Scenarios | IPCC Scenarios (e.g., RCP 4.5, RCP 8.5, SSP245, SSP585). | High-emissions scenarios are often used to assess worst-case potential impacts. | Various IPCC scenarios to model a range of future conditions. | Not directly applicable for future projections, but provides baseline data for model validation. |
| Spatial Resolution | Can range from regional (e.g., 12x12 km, 25x25 km) to site-specific, depending on the input data. | Site-specific, focused on a single World Heritage property and its associated community. | Building-specific, providing detailed analysis of individual structures or even parts of structures. | Point-specific, based on the locations of sensors within and around the historic property. |
| Timeframe | Projections typically extend to the end of the 21st century (e.g., 2100). | Generally assesses vulnerability for a future period, such as 2050. | Can simulate building performance for various future time slices (e.g., 2050, 2100). | Continuous or interval-based data collection over decades (e.g., 30-50 years). |
| Key Advantage | Provides a structured and widely understood method for prioritizing action across multiple sites. | Rapid and cost-effective method that integrates both scientific data and local knowledge. | Provides detailed, quantitative predictions of the future indoor environment and its impact on materials. | Provides the most accurate, real-world data on the actual impacts of climate change on a specific property. |
| Key Limitation | Can be data-intensive and may oversimplify complex interactions. | The qualitative nature of some inputs can lead to subjectivity. | Requires detailed building information and expertise in building physics; model validation is critical. | Time-consuming and requires a long-term commitment of resources; results are not immediately available for future planning. |
Experimental Protocols and Methodologies
A detailed understanding of the experimental protocols is essential for the critical evaluation and application of these validation methodologies.
Risk Assessment Frameworks
The methodology for a typical risk assessment based on the IPCC framework involves:
1. Hazard Identification: Identifying and characterizing climate-related hazards (e.g., increased frequency of extreme rainfall, heatwaves, sea-level rise) relevant to the location of the historic property. This often involves downscaling global climate models.
2. Exposure Assessment: Determining the extent to which the historic property and its components are exposed to the identified hazards. This includes mapping the location and characteristics of the property in relation to hazard zones.
3. Vulnerability Assessment: Evaluating the susceptibility of the property to damage from the identified hazards. This considers factors such as the materials of construction, the condition of the building, and the capacity of its management to adapt.
4. Risk Characterization: Combining the assessments of hazard, exposure, and vulnerability to determine the level of risk. This is often expressed through risk matrices or indices that help to prioritize adaptation measures.
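The combination step can be made concrete with a toy scoring scheme. This is an illustration only: the 1-5 rating scales, the multiplicative combination, and the category thresholds are assumptions for the sketch, not values prescribed by the IPCC framework.

```python
def risk_score(hazard: int, exposure: int, vulnerability: int) -> int:
    """Combine 1-5 ratings multiplicatively (possible range: 1-125)."""
    for rating in (hazard, exposure, vulnerability):
        if not 1 <= rating <= 5:
            raise ValueError("ratings must be on a 1-5 scale")
    return hazard * exposure * vulnerability

def risk_category(score: int) -> str:
    """Map a combined score to a priority band (thresholds are illustrative)."""
    if score >= 60:
        return "high"
    if score >= 20:
        return "moderate"
    return "low"

# e.g. a coastal timber structure: severe hazard, high exposure, fragile fabric
score = risk_score(5, 4, 4)
band = risk_category(score)
```

A real framework would weight the three components and justify the thresholds against expert judgment; the point here is only the structure of the calculation.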
Climate Vulnerability Index (CVI)
The CVI protocol is a workshop-based methodology that brings together heritage managers, climate scientists, and local stakeholders. The key steps are:
1. Scoping: Defining the scope of the assessment, including the key climate stressors to be considered and the timeframe for the analysis.
2. Outstanding Universal Value (OUV) Vulnerability Assessment: This core component of the CVI evaluates the vulnerability of the property's unique heritage values. It is determined by assessing the exposure of the property to climate stressors, its sensitivity to those stressors, and its adaptive capacity.
3. Community Vulnerability Assessment: Evaluating the economic, social, and cultural vulnerability of the community associated with the historic property.
4. Reporting: Synthesizing the findings into a report that provides a clear statement of the property's climate vulnerability and identifies potential adaptation options.
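As a rough sketch of how the OUV assessment combines its three inputs, the toy function below rates exposure and sensitivity on a 1-3 scale and lets adaptive capacity offset their product. The scales, arithmetic, and thresholds are assumptions made for illustration; the actual CVI derives its ratings through structured expert and stakeholder workshops.

```python
def ouv_vulnerability(exposure: int, sensitivity: int, adaptive_capacity: int) -> str:
    """Toy OUV vulnerability rating; all inputs on a 1 (low) to 3 (high) scale."""
    potential_impact = exposure * sensitivity        # ranges 1..9
    score = potential_impact - adaptive_capacity     # capacity offsets impact
    if score >= 6:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

# A highly exposed, highly sensitive property with little capacity to adapt:
rating = ouv_vulnerability(exposure=3, sensitivity=3, adaptive_capacity=1)
```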
Hygrothermal Simulation
The protocol for conducting hygrothermal simulations to assess future climate impacts involves:
1. On-site Monitoring: Installing sensors to collect data on the current indoor and outdoor climate conditions of the historic building. This data is essential for model validation.
2. Model Creation: Developing a detailed 3D model of the building in specialized software (e.g., WUFI® Plus), incorporating information on its geometry, construction materials, and their physical properties.
3. Model Calibration and Validation: Adjusting the model parameters until its outputs closely match the monitored data under current climate conditions. This ensures the model accurately represents the building's hygrothermal behavior.
4. Selection of Future Climate Scenarios: Choosing appropriate climate change scenarios (e.g., from the IPCC) to generate future weather data for the building's location.
5. Simulation of Future Performance: Running the validated model with the future weather data to predict the long-term hygrothermal performance of the building and identify potential risks such as mold growth, frost damage, or overheating.
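The calibration step usually reduces to checking an error metric between simulated and monitored series. The sketch below uses root-mean-square error with invented humidity readings and an invented tolerance; acceptance criteria for real hygrothermal models vary by study and software.

```python
import math

def rmse(simulated, monitored):
    """Root-mean-square error between two equal-length time series."""
    if len(simulated) != len(monitored):
        raise ValueError("series must be the same length")
    return math.sqrt(
        sum((s - m) ** 2 for s, m in zip(simulated, monitored)) / len(simulated)
    )

# Hourly indoor relative humidity (%) from the model vs. sensors (invented).
sim = [55.0, 56.2, 57.1, 58.0]
obs = [54.0, 56.0, 57.5, 58.5]

error = rmse(sim, obs)
calibrated = error < 2.0   # tolerance is an assumption, not a standard
```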
Long-Term Monitoring
A long-term monitoring program is a direct method for validating the impacts of climate change. The protocol includes:
1. Selection of Indicators: Identifying key indicators of climate change impacts, such as rates of material decay, changes in structural stability, or alterations in the indoor microclimate.
2. Zero-Level Registration: Conducting a comprehensive baseline survey of the property's condition at the start of the monitoring period.
3. Interval-Based Registration: Regularly re-surveying the property at predetermined intervals to record changes in the selected indicators.
4. Data Analysis: Analyzing the collected data over time to identify trends and correlate them with observed changes in climate. This data can also be used to validate and improve the accuracy of simulation models.
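The trend-identification step, at its simplest, is an ordinary least-squares fit to an indicator time series. The stone surface-recession measurements below are invented for illustration.

```python
def ols_slope(xs, ys):
    """Slope of the ordinary least-squares line through the points (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    return sxy / sxx

# Cumulative stone surface recession (mm) at one sensor location (invented).
years = [2015, 2016, 2017, 2018, 2019, 2020]
recession = [0.00, 0.11, 0.19, 0.32, 0.41, 0.50]

mm_per_year = ols_slope(years, recession)
```

Comparing such slopes across monitoring intervals, and against climate variables recorded over the same period, is what turns raw registrations into evidence of climate-driven change.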
Visualizing the Validation Process
To further clarify the relationships between these methodologies and the overall validation process, the following diagrams have been created using the Graphviz DOT language.
Caption: General workflow for validating climate change impact assessments.
Caption: Logical relationship of the IPCC Risk Assessment Framework.
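Since the diagrams themselves are not reproduced here, the snippet below shows how a linear validation workflow of the kind described could be regenerated as Graphviz DOT source. The node labels paraphrase steps from this guide; the styling and layout choices are arbitrary.

```python
def workflow_dot(steps):
    """Emit Graphviz DOT source for a simple left-to-right linear workflow."""
    lines = ["digraph validation {", "  rankdir=LR;", "  node [shape=box];"]
    for i, label in enumerate(steps):
        lines.append(f'  s{i} [label="{label}"];')
    for i in range(len(steps) - 1):
        lines.append(f"  s{i} -> s{i + 1};")
    lines.append("}")
    return "\n".join(lines)

dot = workflow_dot([
    "Select methodology",
    "Collect climate and property data",
    "Run assessment",
    "Validate against monitoring data",
    "Report and adapt",
])
# Render with Graphviz, e.g.: dot -Tpng workflow.gv -o workflow.png
```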
References
A Comparative Analysis of International Historic Preservation Standards and the U.S. Advisory Council on Historic Preservation (ACHP)
This guide provides a comparative overview of international historic preservation standards and the regulatory framework of the United States Advisory Council on Historic Preservation (ACHP). The analysis is intended for researchers, scientists, and professionals engaged in drug development who may encounter historic preservation regulations in the context of facility construction and expansion. The guide outlines the core principles, processes, and, where available, the economic impacts associated with these different approaches to safeguarding cultural heritage.
While direct, quantitative experimental data comparing the outcomes of projects under these different standards is not systematically collected on a global scale, this guide summarizes available qualitative comparisons and economic impact studies to offer a comprehensive overview.
Core Principles and Philosophies: A Comparative Overview
International and U.S. historic preservation standards share the common goal of protecting significant cultural resources. However, they are rooted in different philosophical origins and legal structures, which influence their application. International charters, such as the Venice Charter and the Burra Charter, tend to be more philosophical and principles-based, offering a guiding framework for nations to adapt. The ACHP's standards, particularly the Section 106 review process, are codified in federal law and represent a more procedural and regulatory approach.
| Feature | International Standards (e.g., Venice Charter, Burra Charter) | ACHP Standards (Section 106) |
|---|---|---|
| Governing Body | Primarily ICOMOS (International Council on Monuments and Sites) and UNESCO | Advisory Council on Historic Preservation (ACHP), an independent U.S. federal agency[1][2] |
| Core Philosophy | Emphasizes authenticity, integrity, and the preservation of the historic and aesthetic value of a monument in its setting.[3][4] The Burra Charter introduces the concept of "cultural significance" as a guiding principle.[4] | Focuses on a consultative process to take into account the effects of federal undertakings on historic properties. It is a procedural statute that requires federal agencies to "stop, look, and listen." |
| Legal Framework | Doctrinal texts and charters that are influential but not legally binding on their own. They are intended to guide national legislation. | Mandated by the National Historic Preservation Act of 1966 (NHPA). The process is detailed in federal regulations (36 CFR Part 800). |
| Key Document(s) | Venice Charter (1964), Burra Charter (1979, with revisions) | National Historic Preservation Act of 1966, 36 CFR Part 800 |
| Approach to Alterations | Restoration should be a highly specialized operation, respecting original material and authentic documents. Conjectural reconstruction is generally discouraged. | The Secretary of the Interior's Standards for the Treatment of Historic Properties provide guidance for rehabilitation, restoration, preservation, and reconstruction. Rehabilitation is the most flexible treatment. |
| Public Involvement | Encouraged, but the specific mechanisms are determined by national and local laws. | A key component of the Section 106 process, with defined roles for consulting parties, including State Historic Preservation Officers (SHPOs), Tribal Historic Preservation Officers (THPOs), and the public. |
Methodologies for Comparison of Historic Preservation Policies
Direct experimental protocols for comparing historic preservation standards do not exist in the way they do in the natural sciences. Instead, comparative analysis in this field relies on methodologies from the social sciences and humanities.
Methodology 1: Comparative Policy and Legal Analysis
This approach involves a systematic comparison of the legal and policy documents that govern historic preservation in different jurisdictions.
Process:
1. Identification of Key Documents: Gather primary legal and policy texts, such as national laws, regulations, and influential international charters.
2. Thematic Analysis: Identify key themes and principles within the documents, such as the definition of heritage, criteria for significance, the process for review and approval of alterations, and the role of public participation.
3. Discourse Analysis: Examine the language and rhetoric used in the documents to understand the underlying values and assumptions of each system.
4. Case Study Application: Analyze how these different legal and policy frameworks are applied in specific case studies to understand their practical implications.
Methodology 2: Economic Impact Analysis
This methodology seeks to quantify the economic effects of historic preservation activities. While not a direct comparison of standards, it provides data on the outcomes of preservation activities within a specific regulatory environment.
Process:
1. Data Collection: Gather data on direct investments in historic rehabilitation, heritage tourism spending, and the operational budgets of historic sites.
2. Input-Output Modeling: Use economic models to estimate the direct, indirect, and induced economic impacts of these activities on jobs, income, GDP, and tax revenues.
3. Property Value Analysis: Compare property value trends in historic districts with those in non-designated areas to assess the impact of historic designation.
Quantitative Data: Economic Impacts of Historic Preservation in the United States
While direct comparative data on project outcomes is scarce, numerous studies have analyzed the economic impact of historic preservation in the United States under the ACHP's framework. This data provides a quantitative measure of the benefits associated with the U.S. system.
| Economic Indicator | Impact in the United States |
|---|---|
| Private Investment (Historic Tax Credit) | Over $85 billion in private sector investment between 2001 and 2021 through the Federal Historic Preservation Tax Incentives Program. In 2021 alone, this program saw over $7 billion in private investment. |
| Job Creation | The 2021 investment through the Historic Tax Credit program resulted in 135,000 jobs. |
| Income Generation | The 2021 investment through the Historic Tax Credit program generated $5.3 billion in income. |
| Leverage of Federal Funds | Between 2001 and 2020, for every $1 of federal Historic Preservation Fund funding available for competitive grants, an additional $1.86 was requested, indicating high demand. The Save America's Treasures program has seen every $1 of federal funding matched by an additional $1.57 in private or other funding since 2001. |
| Property Values | Studies have shown that historic designation can increase property values. For example, a study in Dubuque, Iowa, found that the average annual growth rate for the value of neighboring historic properties was 9.7%, compared to 3.7% for other properties in the downtown area between 2000 and 2007. |
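The Dubuque figures in the table can be put in perspective with a short compound-growth calculation over the 2000-2007 study window (seven annual growth periods):

```python
def cumulative_growth(annual_rate, years):
    """Total fractional growth after compounding an annual rate for `years` periods."""
    return (1 + annual_rate) ** years - 1

historic = cumulative_growth(0.097, 7)   # neighboring historic properties
other = cumulative_growth(0.037, 7)      # other downtown properties

# Roughly 91% cumulative appreciation for the historic properties versus
# roughly 29% for the others over the same period.
```

The roughly threefold gap in cumulative appreciation illustrates how a modest difference in annual growth rates compounds over even a short study window.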
Visualizing the Processes and Relationships
The ACHP Section 106 Review Process
The following diagram illustrates the key steps in the Section 106 review process, which is the primary mechanism through which the ACHP's standards are applied to federal projects.
Relationship of Key International Historic Preservation Charters
This diagram illustrates the influential relationship between key international charters, showing how they have built upon one another over time.
References
Navigating the Advisory Council on Historic Preservation: A Guide for Researchers
For researchers, scientists, and professionals in drug development, understanding the landscape of regulatory and advisory bodies is paramount. This guide clarifies the role of the Advisory Council on Historic Preservation (ACHP) and explains the nature of the scientific reports it reviews, addressing a common misconception about its function in relation to pharmaceutical and biomedical research.
The Advisory Council on Historic Preservation is an independent federal agency whose primary mission is to promote the preservation, enhancement, and sustainable use of the nation's diverse historic resources. The ACHP advises the President and Congress on national historic preservation policy. A key responsibility of the ACHP is overseeing the Section 106 review process.
The Section 106 Review Process: A Procedural Overview
The Section 106 review process is a crucial step for any federal undertaking that has the potential to affect historic properties.[1] This process requires federal agencies to consider the effects of their projects on historic properties and consult with interested parties to find ways to avoid, minimize, or mitigate any adverse effects.[1][2] The process is outlined in the ACHP's regulations, "Protection of Historic Properties" (36 CFR Part 800).
The four main steps in the Section 106 process are:
1. Initiation of the Section 106 Process: The federal agency notifies relevant parties, including the State Historic Preservation Officer (SHPO) or Tribal Historic Preservation Officer (THPO), and other consulting parties.[1]
2. Identification of Historic Properties: The agency identifies historic properties within the project's area of potential effects.[1]
3. Assessment of Adverse Effects: The agency assesses whether the undertaking will have an adverse effect on the identified historic properties.
4. Resolution of Adverse Effects: If adverse effects are identified, the agency consults with the SHPO/THPO and other parties to develop and evaluate alternatives or mitigation measures.
This process culminates in a legally binding agreement, typically a Memorandum of Agreement (MOA) or a Programmatic Agreement (PA), that outlines the agreed-upon measures.
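For readers who think in code, the four-step sequence can be modeled as an ordered checklist. The step names come from the process as summarized above; the class itself is purely illustrative and implies nothing about how agencies actually track compliance.

```python
SECTION_106_STEPS = (
    "Initiation of the Section 106 Process",
    "Identification of Historic Properties",
    "Assessment of Adverse Effects",
    "Resolution of Adverse Effects",
)

class Section106Review:
    """Toy model enforcing that the review steps are completed in order."""

    def __init__(self):
        self._next = 0

    def complete(self, step):
        # Reject any attempt to perform steps out of sequence.
        if step != SECTION_106_STEPS[self._next]:
            raise ValueError(f"expected step: {SECTION_106_STEPS[self._next]}")
        self._next += 1

    @property
    def done(self):
        return self._next == len(SECTION_106_STEPS)

review = Section106Review()
for step in SECTION_106_STEPS:
    review.complete(step)
# With all four steps complete, the process would culminate in an MOA or PA.
```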
Scientific Reports in the Context of the ACHP
Scientific reports submitted to and considered by the ACHP are fundamentally different from those in the biomedical and pharmaceutical fields. These reports focus on disciplines such as:
- Archaeology: Involving site surveys, excavations, and cultural resource assessments.
- Architectural History: Documenting and evaluating the historical significance of buildings and structures.
- Engineering: Assessing the structural integrity of historic properties and designing appropriate treatments.
- Environmental Science: Evaluating the impact of projects on the natural and historical landscape.
A 2024 report by the ACHP Chair, for instance, focused on recommendations for the application and interpretation of federal historic preservation standards, addressing topics like economic growth, housing, and environmental sustainability in the context of historic resources.
Clarifying the Peer Review Process
While the term "peer review" is central to scientific research, its application within the ACHP's purview differs from the rigorous, data-driven peer review common in drug development. The "review" of reports in the Section 106 process is a consultative and collaborative process among stakeholders with expertise in historic preservation. It is not the double-blind peer review of experimental data that a scientific journal would conduct.
The primary goal of this review is to ensure compliance with the National Historic Preservation Act and to reach a consensus on how to manage the effects of a project on historic properties. The focus is on the soundness of the methodology used to identify and assess these properties and the appropriateness of proposed mitigation measures.
Comparison of Review Processes: ACHP vs. Drug Development
To provide clarity for the target audience, the following table compares the review process for reports submitted under Section 106 with the typical peer review process for scientific reports in drug development.
| Feature | ACHP Section 106 Review Process | Scientific Peer Review in Drug Development |
|---|---|---|
| Primary Goal | Compliance with historic preservation laws and resolution of adverse effects on historic properties. | To assess the scientific validity, originality, and significance of research findings for publication or funding. |
| Reviewers | State/Tribal Historic Preservation Officers, federal agency officials, consulting parties with a demonstrated interest, and the ACHP. | Independent, anonymous experts in the specific scientific field (e.g., pharmacology, toxicology, clinical medicine). |
| Key Criteria | Adequacy of identification and evaluation of historic properties, assessment of effects, and appropriateness of mitigation measures. | Scientific rigor, methodological soundness, validity of data and conclusions, ethical considerations, and contribution to the field. |
| Outcome | A legally binding agreement (MOA or PA) on measures to avoid, minimize, or mitigate adverse effects. | A decision to accept, revise, or reject the manuscript for publication or a funding decision. |
| Data Type | Primarily qualitative and descriptive data from archaeological surveys, historical research, and architectural assessments. Quantitative data may include measurements and material analysis. | Primarily quantitative data from preclinical experiments, clinical trials, and statistical analyses. |
Visualizing the Workflow
To further illustrate the process, the following diagram outlines the typical workflow of the Section 106 review process.
References
Assessing the Long-Term Success of Mitigation Measures in Historic Preservation
A Comparative Guide for Researchers and Cultural Resource Management Professionals
The long-term success of mitigation measures approved by the Advisory Council on Historic Preservation (ACHP) under Section 106 of the National Historic Preservation Act is a critical area of study for researchers, scientists, and drug development professionals invested in cultural heritage management. This guide provides a comparative analysis of common mitigation approaches, drawing on established performance metrics and methodologies to offer a framework for objective assessment.
Measuring Success: A Framework of Performance Indicators
Evaluating the long-term success of mitigation extends beyond the mere completion of a project. The National Academy of Public Administration's report, "Towards More Meaningful Performance Measures for Historic Preservation," provides a valuable framework for this assessment.[1][2] This framework categorizes measures into outcomes, outputs, and efficiency, allowing for a more holistic understanding of a mitigation measure's impact.
For the purposes of this guide, a selection of these performance measures has been adapted to compare different mitigation strategies. These indicators focus on the durability of preservation outcomes, the dissemination of knowledge, and the efficiency of the mitigation process itself.
Comparative Analysis of Mitigation Alternatives
The following tables summarize the performance of common mitigation alternatives against key long-term success indicators. The data presented is synthesized from a review of ACHP guidance, case studies, and academic literature. It is important to note that quantitative data on the long-term performance of mitigation measures is not always consistently tracked or publicly available. Therefore, this comparison relies on a combination of reported outcomes and expert analysis.
| Performance Indicator | Data Recovery (Archaeology) | HABS/HAER/HALS Documentation | Creative Mitigation | Programmatic Agreements (PAs) |
|---|---|---|---|---|
| Preservation of Historic Fabric | Low (by definition, involves excavation and removal of archaeological resources) | Not Applicable (documents, but does not preserve in-situ) | Varies (can range from in-situ preservation to relocation and reuse) | Varies (depends on the stipulations of the agreement) |
| Knowledge Generation & Dissemination | High (generates significant new archaeological data) | High (produces detailed architectural and engineering records) | Varies (can be high if it includes public interpretation and education) | Moderate (can facilitate data sharing and synthesis across multiple projects) |
| Long-Term Public Benefit | Moderate (benefits are primarily academic until published and interpreted for the public) | High (provides a permanent, publicly accessible record) | High (often designed with direct community engagement and benefit in mind) | High (can lead to more predictable and positive preservation outcomes across a program) |
| Efficiency (Time & Cost) | Low (can be time-consuming and expensive) | Moderate (requires specialized expertise and can be costly) | Varies (can be more or less costly than traditional mitigation depending on the approach) | High (streamlines the review process for multiple undertakings)[3][4] |
| Community Engagement | Low to Moderate (often limited to consulting parties) | Low (primarily a technical documentation process) | High (often involves extensive public participation in its development and implementation) | Moderate (PAs are developed in consultation with stakeholders, but individual project review may be streamlined) |
Experimental Protocols and Methodologies
A critical component of assessing mitigation success is understanding the methodologies employed. The following sections detail the standard protocols for the key mitigation alternatives discussed.
Archaeological Data Recovery
Archaeological data recovery is a systematic process for mitigating the adverse effects of a project on an archaeological site by excavating and documenting the site before it is destroyed.[2] The primary goal is to recover significant information that contributes to our understanding of the past.
Key Methodological Steps:
1. Research Design and Treatment Plan: A detailed plan is developed that outlines the research questions to be addressed, the methods of excavation and analysis, and the plan for curation and reporting.
2. Field Excavation: This involves the systematic removal of soil and artifacts, with careful documentation of their context.
3. Laboratory Analysis: Artifacts and samples are cleaned, cataloged, and analyzed to answer the research questions.
4. Reporting and Dissemination: The findings are documented in a comprehensive report and disseminated to the archaeological community and the public.
Historic American Buildings Survey / Historic American Engineering Record / Historic American Landscapes Survey (HABS/HAER/HALS)
HABS, HAER, and HALS are programs administered by the National Park Service to document America's architectural, engineering, and landscape heritage. This documentation serves as a permanent record of historic properties.
Standard Documentation Components:
- Measured Drawings: Detailed architectural drawings, including plans, elevations, sections, and details, are created to scale.
- Large-Format Photography: Archival-quality photographs capture the exterior and interior of the property.
- Written Historical Report: A comprehensive report details the history, significance, and physical description of the property.
Creative Mitigation
Creative mitigation encompasses a wide range of non-traditional approaches to resolving adverse effects that are often tailored to the specific context of the historic property and the community. There is no single protocol for creative mitigation; rather, it is a process of collaborative problem-solving.
Guiding Principles for Developing Creative Mitigation:
- Consider the Significance: The mitigation should be related to the qualities that make the property significant.
- Address the Adverse Effect: The mitigation should directly address the harm caused by the project.
- Involve the Community: Meaningful engagement with the public and consulting parties is essential to developing successful creative mitigation.
- Provide a Public Benefit: The outcome should offer a tangible and lasting benefit to the public.
Programmatic Agreements (PAs)
Programmatic Agreements are legal documents that streamline the Section 106 review process for a particular program, complex project, or multiple undertakings. They establish a process for consultation, review, and compliance with Section 106.
Key Elements of a Programmatic Agreement:
- Roles and Responsibilities: Clearly defines the responsibilities of the federal agency, State Historic Preservation Officer (SHPO), Tribal Historic Preservation Officer (THPO), and other consulting parties.
- Process for Identification and Evaluation: Establishes a streamlined process for identifying and evaluating historic properties.
- Treatment of Historic Properties: Outlines agreed-upon measures to avoid, minimize, or mitigate adverse effects.
- Dispute Resolution: Includes procedures for resolving disagreements among the signatories.
- Monitoring and Reporting: Specifies how the implementation of the agreement will be tracked and reported.
Visualizing the Mitigation Process
The following diagrams, created using the DOT language, illustrate key workflows and relationships in the assessment of mitigation measures.
Comparative analysis of software for cultural heritage data management.
In the realm of cultural heritage, the effective management of diverse and complex datasets is paramount for researchers and institutions. The choice of software for managing collections, from archaeological findings to digital archives, significantly impacts data interoperability, research potential, and public engagement. This guide provides a comparative analysis of four prominent software solutions: Omeka S, CollectiveAccess, Arches, and ResearchSpace. While direct, peer-reviewed quantitative performance benchmarks are not widely published, this guide synthesizes available qualitative data and architectural information. It also proposes a standardized experimental protocol for evaluating such systems.
Qualitative and Feature-Based Comparison
The selection of a data management platform often depends on the specific needs of the institution, including the complexity of the data, the technical expertise of the staff, and the goals of the project. The following table summarizes the key characteristics of the four platforms.
| Feature | Omeka S | CollectiveAccess | Arches | ResearchSpace |
|---|---|---|---|---|
| Primary Use Case | Web publishing and digital exhibits for libraries, museums, and archives.[1] | Comprehensive cataloging and collections management for museums and archives with complex requirements. | Enterprise-level inventory and management of immovable cultural heritage (e.g., archaeological sites, historic buildings).[2][3] | Collaborative research environment for humanities and cultural heritage based on semantic web technologies.[4] |
| Data Model | Item-centric, based on Dublin Core, with the ability to incorporate other vocabularies. | Highly configurable relational model, not hardcoded, allowing for custom data fields and complex relationships. | Graph-based, natively implementing the CIDOC Conceptual Reference Model (CRM) ontology. | RDF triple store, natively supporting the CIDOC CRM for semantic data integration. |
| Metadata Standards | Dublin Core is native; supports import of other vocabularies like Schema.org, with potential performance impacts for very large schemas. | Pre-configured with standards like Dublin Core, VRA Core, MARC, and Getty vocabularies; highly customizable. | Based on international standards, primarily CIDOC CRM, to ensure data longevity and interoperability. | Natively supports CIDOC CRM and allows for the integration of various RDF-based ontologies. |
| Scalability | Best suited for small to medium-sized collections; performance can be affected by a large number of vocabularies. | Designed for large, heterogeneous collections with complex cataloging needs. | Characterized as a robust, enterprise-level platform designed for large-scale organizational contexts. | Scalability is dependent on the underlying triple store technology, designed for handling large, interconnected datasets. |
| Ease of Use | User-friendly interface, designed for users without extensive technical knowledge. | Steeper learning curve due to high configurability and comprehensive feature set. | Requires significant technical expertise for installation, configuration, and data modeling. | Geared towards researchers and data specialists comfortable with semantic web concepts. |
| Open Source | Yes, free and open-source. | Yes, free and open-source. | Yes, free and open-source. | Yes, the core platform is open-source. |
Proposed Experimental Protocol for Performance Benchmarking
To provide a framework for quantitative evaluation, the following experimental protocol is proposed. This methodology outlines a series of standardized tests that could be conducted to generate comparable performance data across different cultural heritage data management platforms.
Objective: To quantitatively assess the performance of cultural heritage data management software across key areas of data ingestion, management, and retrieval under controlled conditions.
1. Test Environment Setup:
- Hardware: All software to be installed on identical virtual machines with specified CPU cores, RAM, and SSD storage.
- Software: Standardized operating system (e.g., Ubuntu 22.04 LTS), web server (e.g., Apache), and database backend (e.g., MySQL, PostgreSQL) as required by each platform.
- Dataset: A standardized cultural heritage dataset will be used, consisting of:
  - 100,000 object records with Dublin Core metadata.
  - 50,000 high-resolution images (average 10 MB each).
  - 10,000 authority records (people, places).
  - A sample CIDOC CRM-compliant dataset in RDF format (for Arches and ResearchSpace).
2. Performance Metrics:
- Data Ingestion Rate: Time taken to import the entire dataset, measured in records per second.
- Batch Image Processing Time: Time taken to generate derivatives (thumbnails, medium-sized images) for all imported images.
- Simple Query Response Time: Average time to execute a simple metadata search (e.g., a search for a specific creator).
- Complex Query Response Time: Average time to execute a complex search involving multiple fields and relationships.
- API Response Time: Average time for the REST API to respond to a request for a single record.
- Concurrent User Load Test: System response time and error rate under simulated load from 10, 50, and 100 concurrent users performing typical read/write operations.
3. Experimental Procedure:
1. Installation and Configuration: Install each platform according to its documentation on a clean virtual machine. Configure for optimal performance.
2. Data Ingestion Test: Using the platform's native import tools, ingest the standardized dataset. Record the start and end times.
3. Query Performance Tests: Execute a predefined set of simple and complex queries 100 times each and record the execution time for each.
4. API Performance Test: Make 1,000 sequential API requests for individual records and record the response time for each.
5. Load Testing: Use a load testing tool (e.g., Apache JMeter) to simulate concurrent users browsing and searching the collection. Record server response times and error rates over a 30-minute period for each user level.
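The query- and API-timing steps above can be sketched in a short harness. The following Python sketch is illustrative only: `fake_query` is a hypothetical stand-in for a real platform call (e.g., an HTTP GET against a record endpoint), and the summary statistics mirror the response-time metrics defined above.

```python
import statistics
import time

def time_query(run_query, repetitions=50):
    """Execute run_query() repeatedly and summarize response times in ms."""
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        run_query()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "mean_ms": statistics.mean(samples),
        "median_ms": statistics.median(samples),
        # Nearest-rank 95th percentile of the sorted samples.
        "p95_ms": samples[min(len(samples) - 1, int(0.95 * len(samples)))],
    }

# Hypothetical stand-in for a real platform query; replace with an actual
# call to the platform's search or REST API when benchmarking.
def fake_query():
    time.sleep(0.001)  # simulate ~1 ms of server work

summary = time_query(fake_query)
print(summary)
```

In a real run, each platform's native API client would be passed in place of `fake_query`, and the same harness reused so that numbers remain comparable across systems.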
Logical and Experimental Workflow Visualizations
The following diagrams illustrate key workflows and relationships in cultural heritage data management, created using the DOT language.
Validating the Economic Impact of Historic Preservation: A Comparative Guide
The economic contribution of historic preservation is a subject of ongoing research and debate. While numerous studies highlight its positive financial effects, the methodologies employed to validate these claims warrant careful scrutiny. This guide provides a comparative analysis of the common approaches used to measure the economic impact of historic preservation, offering researchers and analysts a framework for critically evaluating these studies.
Quantitative Data Summary
The economic benefits of historic preservation are often quantified across several key indicators. The following table summarizes findings from various studies, showcasing the potential economic returns.
| Economic Indicator | Reported Impact | Source (Example) |
|---|---|---|
| Job Creation | Rehabilitation projects create more labor-intensive work, leading to a stronger residual impact on the economy. For every 100 jobs in new construction, 135 are created elsewhere; the same 100 jobs in rehabilitation create 186 jobs elsewhere.[1] In Texas, historic preservation activities created over 79,000 jobs in 2013.[2] | PlaceEconomics, Texas Historical Commission[1][2] |
| Property Values | Properties in historic districts tend to appreciate in value more than comparable properties in non-historic areas.[3] Studies have shown that historic district designation consistently results in greater appreciation of home values over time and more resilience during economic downturns. In Philadelphia, homes in National Register historic districts have a 14.3% premium. | Multiple Studies |
| Heritage Tourism | A significant driver of local economies, heritage tourism generates revenue for hotels, restaurants, and other businesses. In Texas, heritage tourism accounted for approximately 12.5% of total direct travel spending in 2013, close to $7.3 billion. | Texas Historical Commission |
| Tax Revenue | Historic preservation projects generate revenue for federal, state, and local governments. In Texas, preservation-related economic activity returned $291 million in state and local taxes in one year. | U.S. National Park Service, Texas Historical Commission |
| Gross Domestic Product (GDP) | Historic rehabilitation in Texas adds $1.04 billion to the state's annual GDP. Overall, historic preservation activities in Texas contribute over $4.6 billion annually to the state. | Texas Historical Commission |
Methodologies for Economic Impact Analysis
Several methodologies are employed to quantify the economic impacts of historic preservation. Each has its strengths and weaknesses, and a comprehensive understanding requires a multi-faceted approach.
1. Input-Output (I-O) Models:
- Description: I-O models are a quantitative tool used to analyze the ripple effects of an initial economic activity (the direct impact) throughout a regional economy. They calculate the indirect (inter-industry purchases) and induced (household spending) effects to determine the total economic impact. A specialized I-O model for historic preservation is the Preservation Economic Impact Model (PEIM), developed by Rutgers University.
- Application: Used to forecast job creation, changes in income, and contributions to GDP resulting from preservation projects.
- Data Inputs: Key project characteristics such as location, total development cost, and type of project (e.g., commercial, residential).
- Limitations: The accuracy of I-O models depends on the quality and granularity of the input data. There is a need for consistent and credible data collection.
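To make the mechanics of an I-O model concrete, the following Python sketch applies the standard Leontief inverse to a hypothetical three-sector economy. The coefficient matrix and the $10M direct spend are invented for illustration; they are not drawn from PEIM or any published I-O table.

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix A: A[i, j] is the input
# required from sector i per dollar of output in sector j.
# Sectors (invented): construction, manufacturing, services.
A = np.array([
    [0.10, 0.05, 0.02],
    [0.20, 0.15, 0.10],
    [0.15, 0.10, 0.05],
])

# Direct effect: a $10M rehabilitation project spent entirely in construction.
final_demand = np.array([10.0, 0.0, 0.0])  # $ millions

# The Leontief inverse (I - A)^-1 converts final demand into total output:
# the direct effect plus the indirect inter-industry ripple effects.
total_output = np.linalg.inv(np.eye(3) - A) @ final_demand

print("Total output by sector ($M):", np.round(total_output, 2))
print("Output multiplier:", round(total_output.sum() / final_demand.sum(), 2))
```

The resulting multiplier (total output divided by the initial spend) is the quantity that I-O studies report as the "ripple effect" of preservation spending.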
2. Econometric Studies:
- Description: These studies use statistical methods to analyze economic data, such as property values. They aim to isolate the effect of a specific variable (e.g., historic designation) while controlling for other factors that might influence the outcome.
- Application: Commonly used to determine the impact of historic district designation on property values by comparing them to similar, non-designated areas.
- Data Inputs: Property sales data, property characteristics, neighborhood amenities, and demographic information.
- Limitations: Requires large and detailed datasets. The selection of appropriate control groups is critical to the validity of the results.
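A minimal hedonic-regression sketch of this approach, using synthetic data with an assumed 12% designation premium. All numbers are invented for illustration; a real study would use actual sales records and many more controls.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic property data (all values invented for illustration).
sqft = rng.uniform(800, 3000, n)   # living area
age = rng.uniform(0, 120, n)       # building age in years
historic = rng.integers(0, 2, n)   # 1 = inside a designated historic district

# Generate log sale prices with an assumed "true" 12% designation premium.
log_price = (11.0 + 0.0004 * sqft - 0.001 * age
             + 0.12 * historic + rng.normal(0, 0.05, n))

# Hedonic regression: regress log(price) on characteristics plus the
# designation dummy; the dummy's coefficient isolates the premium.
X = np.column_stack([np.ones(n), sqft, age, historic])
coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)

premium_pct = (np.exp(coef[3]) - 1) * 100
print(f"Estimated historic-district premium: {premium_pct:.1f}%")
```

Because the price equation is in logs, the dummy coefficient converts to a percentage premium via `exp(b) - 1`, which is how figures like the 14.3% Philadelphia premium cited above are typically derived.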
3. Cost-Benefit Analysis (CBA):
- Description: CBA compares the total costs of a project (e.g., rehabilitation expenses) with its total benefits (e.g., increased property values, tourism revenue, environmental benefits).
- Application: Used to evaluate the overall economic feasibility and desirability of a preservation project compared to alternatives like demolition and new construction.
- Data Inputs: Project costs, projected revenues, and monetized social and environmental benefits.
- Limitations: Quantifying intangible benefits (e.g., aesthetic value, community identity) in monetary terms can be challenging and subjective.
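At its core, a CBA reduces to a net-present-value calculation. The toy example below uses entirely hypothetical cashflows; recomputing NPV under several discount rates also doubles as a simple sensitivity check on the most influential assumption.

```python
def npv(rate, cashflows):
    """Net present value; cashflows[t] is the net cashflow at end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Illustrative project, $ thousands (all figures hypothetical): a 2,000
# rehabilitation outlay in year 0, then ten years of 350 in net benefits
# (rent premium, tourism revenue, avoided new-construction costs).
cashflows = [-2000] + [350] * 10

for rate in (0.03, 0.05, 0.07):
    print(f"discount rate {rate:.0%}: NPV = {npv(rate, cashflows):,.1f}")
```

A positive NPV at the chosen discount rate indicates the rehabilitation alternative is preferred; the spread across rates shows how sensitive that conclusion is.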
Experimental Protocols
Validating the findings of economic impact studies requires a rigorous and transparent methodology. While "experiments" in the traditional scientific sense are not always feasible, a well-designed study protocol should include the following elements:
- Clear Definition of Scope: The geographic area, time period, and specific preservation activities being analyzed must be clearly defined.
- Baseline Data Collection: Establishing a baseline of economic conditions before the preservation activity is crucial for comparison. This includes data on employment, property values, and tourism in the study area and a comparable control area.
- Control Group Selection: A key element of robust analysis is the use of a control group (a similar area without the historic preservation intervention). This helps to isolate the effects of preservation from broader economic trends.
- Data Verification and Triangulation: Data should be sourced from reliable public and private records and, where possible, triangulated with other data sources to ensure accuracy.
- Model Specification and Assumptions: The specific econometric or input-output model used should be clearly described, along with all underlying assumptions.
- Sensitivity Analysis: This involves testing how the results change when key assumptions or data inputs are altered, which helps to assess the robustness of the findings.
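The baseline and control-group elements above combine naturally in a difference-in-differences estimate. The sketch below uses invented property-value figures purely to show the arithmetic.

```python
# Difference-in-differences on median property values (invented figures).
# "treated" = the designated historic district; "control" = a comparable
# non-designated area chosen per the control-group criterion above.
before = {"treated": 210_000, "control": 205_000}
after = {"treated": 290_000, "control": 255_000}

trend_treated = after["treated"] - before["treated"]
trend_control = after["control"] - before["control"]

# Subtracting the control area's change removes the shared market trend,
# assuming both areas would have moved in parallel absent designation.
did_effect = trend_treated - trend_control
print(f"Estimated designation effect: ${did_effect:,}")
```

The key assumption (parallel trends) is exactly what the baseline-data and control-group-selection steps are designed to make credible.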
Visualizing Methodological Relationships and Workflows
Logical Relationships of Validation Methodologies
References
- 1. ACHP - Wikipedia [en.wikipedia.org]
- 2. procurementsciences.com [procurementsciences.com]
- 3. Federal Register :: Agencies - Advisory Council on Historic Preservation [federalregister.gov]
- 4. Advisory Council on Historic Preservation (ACHP) | BroadbandUSA [broadbandusa.ntia.gov]
- 5. ACHP.org [achp.org]
Safety Operating Guide
Proper Disposal of ACHP: A Critical Guide for Laboratory Professionals
The proper disposal of chemical reagents is a critical component of laboratory safety and environmental responsibility. The acronym "ACHP" can refer to more than one distinct chemical entity, and it is imperative for researchers, scientists, and drug development professionals to correctly identify the specific compound in use before proceeding with any handling or disposal protocols. This guide provides detailed disposal procedures for two compounds that may be identified as ACHP: acetylcholine perchlorate and ACHP (IKK-2 Inhibitor VIII).
Crucial First Step: Chemical Identification
Before proceeding, verify the full chemical name and CAS number of the compound you are working with. The disposal procedures for these substances are significantly different, and incorrect disposal can lead to hazardous situations and regulatory non-compliance.
Section 1: Acetylcholine Perchlorate
Acetylcholine perchlorate is the perchlorate salt of acetylcholine. Perchlorates can be reactive and require specific disposal methods.
Chemical and Safety Data for Acetylcholine Perchlorate
| Property | Value |
|---|---|
| CAS Number | 927-86-6 |
| Molecular Formula | C₇H₁₆NO₂·ClO₄ |
| Molecular Weight | 245.66 g/mol |
| Primary Hazards | May be combustible. Handle with care to avoid dust formation. |
| Personal Protective Equipment | Safety glasses, gloves, and a lab coat should be worn. Use in a well-ventilated area. |
Experimental Protocol: Disposal of Acetylcholine Perchlorate
The following step-by-step procedure should be followed for the disposal of acetylcholine perchlorate:
1. Consult a Licensed Professional Waste Disposal Service: The primary and most critical step is to contact a licensed professional waste disposal service to handle the material.[1]
2. Preparation for Incineration: If directed by the waste disposal service, the material may be prepared for incineration. This involves dissolving or mixing the acetylcholine perchlorate with a combustible solvent.[1]
3. Chemical Incineration: The mixture should be burned in a chemical incinerator equipped with an afterburner and a scrubber to neutralize harmful combustion byproducts.[1]
4. Disposal of Contaminated Packaging: Any packaging that has been in contact with acetylcholine perchlorate should be disposed of in accordance with all prevailing country, federal, state, and local regulations.[1] This may involve recycling or disposal as hazardous waste.
5. Regulatory Compliance: Ensure that all disposal activities are in full compliance with local, state, and federal regulations.
Disposal Workflow for Acetylcholine Perchlorate
Caption: Disposal workflow for Acetylcholine Perchlorate.
Section 2: ACHP (IKK-2 Inhibitor VIII)
ACHP, in the context of biochemical research, refers to a potent and selective IKK-β inhibitor. As a biologically active small molecule, it requires careful handling and disposal as hazardous waste.
Chemical and Safety Data for ACHP (IKK-2 Inhibitor VIII)
| Property | Value |
|---|---|
| CAS Number | 406208-42-2 |
| Molecular Formula | C₂₁H₂₄N₄O₂ |
| Molecular Weight | 364.44 g/mol |
| Primary Hazards | Potent bioactive compound. Potential for unforeseen biological effects. Handle with a high degree of caution. |
| Personal Protective Equipment | A lab coat, safety glasses with side shields, and appropriate chemical-resistant gloves are mandatory. Handle in a certified chemical fume hood. |
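As a quick sanity check on the tabulated data, the molecular weight can be recomputed from the molecular formula. This is a back-of-the-envelope Python check using standard rounded atomic weights.

```python
# Recompute the molecular weight of ACHP (C21H24N4O2) from standard
# (rounded) atomic weights as a cross-check on the tabulated 364.44 g/mol.
atomic_weight = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
formula = {"C": 21, "H": 24, "N": 4, "O": 2}

mw = sum(atomic_weight[el] * count for el, count in formula.items())
print(f"Computed MW: {mw:.2f} g/mol")
```

The computed value agrees with the table to within rounding of the atomic weights.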
Experimental Protocol: Disposal of ACHP (IKK-2 Inhibitor VIII)
Due to its high potency, a specific Safety Data Sheet (SDS) should always be consulted. In the absence of a specific SDS, the following general best practices for the disposal of potent kinase inhibitors should be followed.
1. Consult Institutional Safety Office: Before beginning any disposal procedure, it is essential to consult your institution's Environmental Health and Safety (EHS) office for specific guidance on disposing of potent chemical inhibitors.
2. Solid Waste Disposal:
   - Unused or Expired Compound: Keep the compound in its original, securely sealed container. If repackaging is necessary, use a clearly labeled, compatible container. The label must include the chemical name ("ACHP" or "IKK-2 Inhibitor VIII"), the CAS number (406208-42-2), and any known hazard warnings.
   - Contaminated Labware: All disposable items that have come into direct contact with ACHP (e.g., pipette tips, microfuge tubes, gloves) should be collected as solid hazardous waste in a designated, durable, and clearly labeled hazardous waste bag or container.
3. Liquid Waste Disposal:
   - Solutions: Collect all solutions containing ACHP (e.g., dissolved in DMSO) in a dedicated, sealed, and clearly labeled hazardous waste container. The label should indicate the solvent and the solute with its approximate concentration.
   - No Drain Disposal: Under no circumstances should solutions containing ACHP be disposed of down the drain.
4. Decontamination of Glassware and Surfaces:
   - Initial Rinse: Rinse contaminated glassware and surfaces with a suitable solvent that will solubilize ACHP (e.g., ethanol or DMSO), collecting the rinsate as hazardous waste.
   - Secondary Wash: Wash with an appropriate laboratory detergent and water.
   - Final Rinse: Rinse thoroughly with water.
5. Waste Pickup: Arrange for the disposal of all hazardous waste containers through your institution's EHS-approved waste pickup service.
Disposal Workflow for ACHP (IKK-2 Inhibitor VIII)
Caption: Disposal workflow for ACHP (IKK-2 Inhibitor VIII).
Conclusion: Prioritizing Safety Through Diligence
The safe disposal of laboratory chemicals is paramount for the protection of personnel and the environment. The ambiguity of acronyms like "ACHP" underscores the need for meticulous record-keeping and clear labeling of all chemical reagents. Always consult the specific Safety Data Sheet for the compound and adhere to the guidelines provided by your institution's Environmental Health and Safety department. By following these established protocols, you contribute to a safer research environment and ensure regulatory compliance.
Essential Safety and Operational Guide for Handling ACHP
For Researchers, Scientists, and Drug Development Professionals
This guide provides crucial safety and logistical information for the handling of ACHP (2-Amino-6-[2-(cyclopropylmethoxy)-6-hydroxyphenyl]-4-(4-piperidinyl)-3-pyridinecarbonitrile), a selective IκB kinase (IKK) inhibitor. Given the absence of a specific Safety Data Sheet (SDS) for this compound, this document compiles best practices based on the known hazards of structurally similar compounds, such as aminopyridines and pyridine carbonitriles. These related compounds are generally considered hazardous, and a conservative approach to handling this compound is strongly recommended.
Hazard Assessment and Personal Protective Equipment (PPE)
Due to the potential hazards associated with the aminopyridine and pyridine carbonitrile moieties, ACHP should be handled with caution. The primary risks are anticipated to be acute toxicity if swallowed, skin or eye irritation upon contact, and potential respiratory irritation from aerosolized particles.[1][2][3][4][5]
A comprehensive PPE strategy is mandatory to minimize exposure. The following table summarizes the required PPE for various laboratory operations involving ACHP.
| Operation | Eye Protection | Hand Protection | Body Protection | Respiratory Protection |
|---|---|---|---|---|
| Weighing and preparing solutions | Safety glasses with side shields or chemical splash goggles. | Nitrile gloves (double-gloving recommended). | Standard laboratory coat. | Required if not handled in a certified chemical fume hood. Use a NIOSH-approved respirator with an appropriate cartridge for organic vapors and particulates. |
| Cell culture and in vitro assays | Safety glasses with side shields. | Nitrile gloves. | Standard laboratory coat. | Not generally required if performed in a biological safety cabinet. |
| In vivo studies (animal handling) | Safety glasses with side shields. | Nitrile gloves. | Disposable gown over laboratory coat. | Recommended during procedures with a high risk of aerosol generation. |
| Spill cleanup | Chemical splash goggles and a face shield. | Heavy-duty nitrile or butyl rubber gloves. | Chemical-resistant apron or coveralls over a laboratory coat. | NIOSH-approved respirator with an appropriate cartridge for organic vapors and particulates. |
Safe Handling and Operational Plan
Adherence to a strict operational plan is critical for the safe handling of ACHP.
2.1. Engineering Controls:
- Ventilation: All handling of solid ACHP and preparation of stock solutions must be conducted in a certified chemical fume hood to minimize inhalation exposure.
- Eyewash and Safety Shower: Ensure that a functional eyewash station and safety shower are readily accessible in the immediate work area.
2.2. Procedural Guidance:
- Avoid Contamination: Do not eat, drink, or smoke in laboratory areas where ACHP is handled.
- Personal Hygiene: Wash hands thoroughly with soap and water after handling ACHP and before leaving the laboratory.
- Labeling: All containers of ACHP, including stock solutions and dilutions, must be clearly labeled with the chemical name, concentration, date of preparation, and appropriate hazard warnings.
- Transportation: When transporting ACHP within the laboratory, use secondary containment to prevent spills.
Spill Management Protocol
In the event of an ACHP spill, immediate and appropriate action is necessary to prevent exposure and contamination.
3.1. Spill Response Workflow:
Caption: Workflow for responding to an ACHP spill.
Disposal Plan
Proper disposal of this compound and associated waste is crucial to prevent environmental contamination and ensure regulatory compliance.
4.1. Waste Segregation:
- Solid Waste: Unused or expired solid ACHP, as well as grossly contaminated items (e.g., weigh boats, pipette tips), should be collected in a designated, sealed, and clearly labeled hazardous waste container.
- Liquid Waste: Aqueous solutions containing ACHP should be collected in a separate, sealed, and labeled hazardous liquid waste container. Avoid mixing with other chemical waste streams unless compatibility is confirmed.
- Sharps: Needles, syringes, and other contaminated sharps must be disposed of in a designated sharps container.
4.2. Disposal Procedure: All waste containing ACHP must be disposed of as hazardous chemical waste through your institution's Environmental Health and Safety (EHS) office. Do not pour ACHP solutions down the drain. Follow all local, state, and federal regulations for hazardous waste disposal.
Experimental Protocols
While specific experimental protocols will vary, the following general guidelines should be integrated into your standard operating procedures (SOPs) for handling ACHP.
5.1. Preparation of Stock Solutions:
1. Calculate the required mass of ACHP for the desired concentration and volume.
2. Perform all weighing and handling of solid ACHP within a chemical fume hood.
3. Use an anti-static weigh boat or paper.
4. Carefully transfer the weighed ACHP to an appropriate container (e.g., a conical tube or vial).
5. Add the desired solvent (e.g., DMSO) dropwise to the solid to avoid aerosolization, then add the remaining solvent to the final volume.
6. Cap the container securely and vortex or sonicate until the ACHP is fully dissolved.
7. Label the stock solution container clearly with the chemical name, concentration, solvent, date, and your initials.
8. Store the stock solution at the recommended temperature, protected from light if necessary.
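The mass calculation in the first step can be scripted. A small sketch, using the molecular weight from the data table above; the 10 mM / 1 mL example values are illustrative.

```python
def required_mass_mg(conc_mM, volume_mL, mw_g_per_mol):
    """Mass of solid needed for a stock of the given molar concentration."""
    moles = (conc_mM / 1000.0) * (volume_mL / 1000.0)  # mol = M x L
    return moles * mw_g_per_mol * 1000.0               # g -> mg

# Example: a 10 mM ACHP stock in 1 mL of DMSO (MW 364.44 g/mol).
mass = required_mass_mg(conc_mM=10.0, volume_mL=1.0, mw_g_per_mol=364.44)
print(f"Weigh out {mass:.2f} mg of ACHP")
```

Working in millimolar and milliliters keeps the answer in milligrams, the scale at which such stocks are typically weighed.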
This guide is intended to provide a framework for the safe handling of ACHP. It is not a substitute for a compound-specific SDS. Always consult with your institution's EHS department for specific guidance and training before working with any new chemical.
Retrosynthesis Analysis
AI-Powered Synthesis Planning: Our tool employs Template_relevance models (Pistachio, BKMS_metabolic, Pistachio_ringbreaker, Reaxys, and Reaxys_biocatalysis), leveraging a vast database of chemical reactions to predict feasible synthetic routes.
One-Step Synthesis Focus: Specifically designed for one-step synthesis, it provides concise and direct routes for your target compounds, streamlining the synthesis process.
Accurate Predictions: Utilizing the extensive PISTACHIO, BKMS_METABOLIC, PISTACHIO_RINGBREAKER, REAXYS, REAXYS_BIOCATALYSIS database, our tool offers high-accuracy predictions, reflecting the latest in chemical research and data.
Strategy Settings
| Precursor scoring | Relevance Heuristic |
|---|---|
| Min. plausibility | 0.01 |
| Model | Template_relevance |
| Template Set | Pistachio/Bkms_metabolic/Pistachio_ringbreaker/Reaxys/Reaxys_biocatalysis |
| Top-N result to add to graph | 6 |
Feasible Synthetic Routes
Disclaimer and Information on In-Vitro Research Products
Please be aware that all articles and product information presented on BenchChem are intended solely for informational purposes. The products available for purchase on BenchChem are specifically designed for in-vitro studies, which are conducted outside of living organisms. In-vitro studies, derived from the Latin term "in glass," involve experiments performed in controlled laboratory settings using cells or tissues. It is important to note that these products are not categorized as medicines or drugs, and they have not received approval from the FDA for the prevention, treatment, or cure of any medical condition, ailment, or disease. We must emphasize that any form of bodily introduction of these products into humans or animals is strictly prohibited by law. It is essential to adhere to these guidelines to ensure compliance with legal and ethical standards in research and experimentation.
