Broadleaf Services Teams with Echelon Services and SPAARK to Deliver Strategic Data Analytics Support Services to a DoD Agency

Description of Work Performed: Broadleaf Services supports a broad spectrum of analytics, data visualization, data exploitation, management consulting, and support services necessary for implementing Artificial Intelligence/Machine Learning (AI/ML) applications in the Contract Administration field. Specifically, Broadleaf Services applies its professional consulting expertise to the full range of business intelligence program/project management services necessary to implement a data strategy for DCMA, which is expected to evolve over the contract’s period of performance. These capabilities span a wide range of functional data management disciplines, including data exploitation and analysis, data visualization and dashboard strategy, application program and project management, and data management.

Broadleaf Services provides the COR with Monthly and Quarterly Progress Reports covering all work completed during the reporting period and work planned for the subsequent period. These reports also identify any problems that arose and describe how they were resolved. For any unresolved issues, Broadleaf Services provides an explanation, including a plan and timeframe for resolution. We monitor progress against the Performance Plan and report any deviations to prevent the need for escalation.

Background of DCMA Analytic Requirement: The DCMA Chief Data Officer (CDO) requires additional resources and expertise to efficiently and effectively execute its responsibility for managing the Agency’s data resources and ensuring that DCMA complies with Federal law, Department of Defense (DOD) directives and oversight, and internal policy and requirements. Legal and statutory requirements include those specified in the Paperwork Reduction Act, the Privacy Act, the Federal Records Act, the Freedom of Information Act (FOIA), the Clinger-Cohen Act, and the Data Quality Act. Federal oversight requirements include policy and guidance documents issued by the Office of Management and Budget (OMB), the Government Accountability Office (GAO), and the Executive Office of the President. Internal requirements include those issued by the agency’s Inspector General, the agency’s Director, and the Chief Information Officer (CIO).

As a DOD Combat Support Agency, DCMA ensures the integrity of the contracting process and provides a broad range of contract-procurement management and administrative services to ensure a valued product is delivered promptly and ready for use by America’s Warfighters. DCMA works directly with Defense suppliers to ensure that DOD, Federal, and allied Government supplies and services are delivered on time, at projected cost, and meet all performance requirements. With headquarters at Fort Gregg-Adams, Virginia, DCMA employs approximately 10,000 civilian and military professionals at more than 800 duty stations worldwide. DCMA’s Chief Data and Analytics Office includes approximately 25 Government and contractor personnel around the world.

Broadleaf Services provides professional services in the following areas:

  • Advanced Analytics
  • Data Strategy Development and Implementation
  • Data Science
  • Performance Management Integration
  • Data Analytics
  • Dashboard and Data Visualization Support
  • Data Visualization Sub-Tasks
  • Application Development Support
  • Application Development Support Sub-Tasks
  • Artificial Intelligence (AI) and Machine Learning (ML) Planning and Implementation
  • Enterprise Architecture (Data Architecture and Data Management)
  • Research and Analysis of the Current State
  • Conceptual Architecture Diagrams and Artifacts


Transforming Government Operations: The Revolutionary Impact of AI Technologies

By Broadleaf Services

In an era where digital transformation is not just a buzzword but a necessity, Artificial Intelligence (AI) stands at the forefront of this revolution, especially in the realm of government operations. As a government IT contractor, I’ve witnessed firsthand the transformative power of AI in redefining how government services are delivered and decisions are made. This blog post delves into the myriad ways AI technologies are revolutionizing government operations, enhancing efficiency, and paving the way for faster, more accurate decision-making.

The AI Revolution in Government Operations

Enhanced Efficiency and Productivity

AI technologies are instrumental in automating routine tasks, from data entry to complex analytics. This automation not only speeds up processes but also minimizes human error, leading to more efficient and reliable government operations. For instance, AI-driven chatbots are now handling citizen queries, freeing up human resources for more complex tasks that require human empathy and understanding.
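The query-handling pattern described above can be sketched in a few lines. This is a deliberately naive keyword-overlap matcher with made-up questions and answers, not any deployed system:

```python
# Minimal sketch of FAQ-style query routing, the kind of task AI-driven
# chatbots automate. All questions and answers here are hypothetical.

FAQ = {
    "How do I renew my business license?": "Renew online via the licensing portal.",
    "What are office hours?": "Offices are open 8am-4:30pm, Monday through Friday.",
    "How do I request public records?": "Submit a records request through the FOIA office.",
}

def answer(query: str) -> str:
    """Return the FAQ answer whose question shares the most words with the query."""
    q_words = set(query.lower().split())
    best, best_overlap = None, 0
    for question, reply in FAQ.items():
        overlap = len(q_words & set(question.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = reply, overlap
    return best or "Routing your question to a human representative."
```

Real deployments use trained language models and escalation workflows; the fallback branch mirrors the handoff to human staff described above.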

Improved Decision-Making

AI’s ability to process and analyze vast amounts of data far exceeds human capabilities. Governments are leveraging AI to sift through big data, deriving insights that inform policymaking and resource allocation. This data-driven approach ensures that decisions are based on comprehensive analysis, leading to more effective and targeted policies.

Predictive Analytics for Proactive Governance

AI’s predictive capabilities are a game-changer for government operations. By analyzing trends and patterns, AI can forecast potential issues, from public health crises to infrastructure needs, allowing governments to adopt a proactive rather than reactive approach. This foresight is crucial in resource planning and crisis management.
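As a toy illustration of the trend analysis described above, a linear fit over a hypothetical monthly series can project the next period's value. The data and model here are illustrative only; production forecasting uses far richer methods:

```python
# Minimal sketch of trend-based forecasting: fit a linear trend to a
# hypothetical monthly series (e.g., service requests) and project it forward.
import numpy as np

history = [100, 108, 115, 124, 130, 139]  # hypothetical monthly counts
months = np.arange(len(history))

# Least-squares linear fit: slope is the average month-over-month growth.
slope, intercept = np.polyfit(months, history, 1)

def forecast(months_ahead: int) -> float:
    """Project the fitted trend months_ahead past the last observation."""
    return slope * (len(history) - 1 + months_ahead) + intercept
```

A rising slope on a metric like infrastructure fault reports is exactly the kind of signal that lets an agency act before a problem becomes a crisis.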

Enhancing Public Safety and Security

AI technologies play a pivotal role in public safety, from smart surveillance systems that enhance security to predictive policing tools that help in crime prevention. AI-driven systems can analyze data from various sources, identify potential threats, and enable quicker, more effective responses.

Challenges in AI Integration

Despite the benefits, integrating AI into government operations is not without its challenges.

    1. Data Privacy and Security:

With AI systems handling vast amounts of sensitive data, ensuring privacy and security is paramount. Governments must establish robust data governance frameworks to protect citizen data from breaches and misuse.

    2. Ethical Considerations and Bias:

AI systems are only as unbiased as the data they are fed. There’s a growing concern about AI algorithms perpetuating existing biases, leading to unfair or unethical outcomes. Ensuring AI ethics and fairness is a significant challenge that needs continuous attention.

    3. Skill Gap and Workforce Transformation:

The shift towards AI-driven operations requires a workforce skilled in new technologies. This transition poses a challenge in terms of retraining and reskilling employees to work alongside AI systems effectively.

    4. Integration with Existing Systems:

Integrating AI technologies with legacy systems in government poses technical and compatibility challenges. Seamless integration is crucial for maximizing the benefits of AI.

Conclusion

The integration of AI into government operations is not just a futuristic concept but a present reality. The benefits of AI in enhancing efficiency, improving decision-making, and enabling proactive governance are immense. However, navigating the challenges of data privacy, ethical considerations, workforce transformation, and technical integration is crucial for realizing the full potential of AI in government operations. As we continue to embrace this AI revolution, it’s essential to approach it with a balanced view, addressing challenges while harnessing its transformative power for the greater good of public service and governance.

Embracing AI in government operations is a journey, not a destination. It requires continuous learning, adaptation, and collaboration. I invite you to join the conversation – share your thoughts, experiences, and insights on how AI is transforming government operations in your sphere. Let’s collaborate to make the AI revolution in government a success for all.

Navigating the Ethical Landscape of AI in Government: Balancing Innovation with Integrity

By Broadleaf Services

The integration of Artificial Intelligence (AI) in government operations marks a significant leap forward in public service efficiency and decision-making. However, this technological advancement brings with it a complex array of ethical considerations that must be addressed to maintain citizen trust and safety. In this blog post, we delve into the critical ethical issues of privacy, bias, and transparency in AI applications within government sectors, emphasizing the urgent need for robust ethical frameworks.

The Ethical Imperatives of AI in Government

Privacy Concerns in the Age of AI

AI systems, with their unparalleled data processing capabilities, can inadvertently become tools that infringe on individual privacy. Governments collect and store vast amounts of personal data, and the use of AI to analyze this data raises significant privacy concerns. Ensuring that AI systems respect citizen privacy and comply with data protection laws is paramount. This involves implementing strict data governance policies and ensuring that AI algorithms are designed to protect personal information from unauthorized access or misuse.
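One concrete governance measure consistent with the paragraph above is pseudonymizing direct identifiers before analysis. This sketch uses salted hashing; the field names and salt handling are illustrative assumptions, not a prescribed agency practice:

```python
# Minimal sketch of field-level data protection before analysis: direct
# identifiers are replaced with salted hashes so records can still be joined
# without exposing raw PII. Field names and the salt are illustrative only.
import hashlib

SALT = b"rotate-me-per-dataset"  # in practice, managed by a secrets service

def pseudonymize(record: dict, pii_fields=("name", "ssn")) -> dict:
    """Return a copy of record with PII fields replaced by salted SHA-256 tokens."""
    out = dict(record)
    for field in pii_fields:
        if field in out:
            digest = hashlib.sha256(SALT + str(out[field]).encode()).hexdigest()
            out[field] = digest[:16]  # truncated token, stable per input value
    return out
```

Because the token is deterministic for a given salt, analysts can still link records across datasets without ever seeing the underlying identifier.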

Combating Bias and Ensuring Fairness

AI systems are only as unbiased as the data they are trained on. There is a growing concern that AI, if not carefully managed, can perpetuate existing societal biases, leading to discriminatory outcomes in areas like law enforcement, social welfare, and public services. Governments must prioritize the development of AI systems that are fair and impartial. This involves auditing datasets for bias, developing diverse training datasets, and continuously monitoring AI systems for discriminatory patterns.
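Auditing for bias can start with something as simple as comparing selection rates across groups. The sketch below applies the common "four-fifths" heuristic to hypothetical outcome data; real audits examine many more metrics:

```python
# Minimal sketch of one bias audit: compare selection rates across groups
# and flag a disparity under the common "four-fifths" heuristic.
# The outcome data below are hypothetical.

def selection_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher; < 0.8 flags disparity."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6/8 approved
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 3/8 approved
```

Here the ratio is 0.5, well below the 0.8 threshold, which would trigger a closer look at the training data and model before deployment.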

Transparency and Accountability in AI Systems

The ‘black box’ nature of many AI algorithms poses a significant challenge to transparency and accountability. For citizens to trust AI-driven government decisions, they need to understand how these decisions are made. Ensuring transparency in AI processes and being accountable for AI-driven outcomes is crucial. This can be achieved by implementing explainable AI (XAI) practices, where AI decisions can be understood and explained in human terms.
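For simple models, explainability can be direct: a linear scorer's output decomposes exactly into per-feature contributions. The weights and feature names below are invented for illustration; genuine XAI tooling for complex models relies on more involved post-hoc attribution methods:

```python
# Minimal sketch of explainability for a linear scoring model: each feature's
# contribution (weight * value) is reported alongside the total score, so a
# decision can be explained in human terms. Weights are made up.

WEIGHTS = {"income_verified": 2.0, "years_at_address": 0.5, "missed_payments": -1.5}

def score_with_explanation(features: dict):
    """Return (score, contributions) so each feature's effect is visible."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    return sum(contributions.values()), contributions
```

A citizen asking "why was my application scored this way?" can then be shown which factors helped and which hurt, instead of an opaque number.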

The Need for Ethical Frameworks

Developing and implementing ethical frameworks for AI in government is not just a recommendation but a necessity. These frameworks should:

– Establish Clear Ethical Guidelines: Define what constitutes ethical AI use within government operations, including respect for privacy, fairness, and transparency.

– Ensure Regulatory Compliance: Align AI practices with existing laws and regulations, and adapt policies to accommodate the evolving nature of AI technologies.

– Promote Cross-Sector Collaboration: Encourage collaboration between government entities, AI developers, ethicists, and the public to address ethical challenges comprehensively.

– Foster Continuous Learning and Adaptation: Recognize that AI ethics is a rapidly evolving field and commit to ongoing learning and adaptation of ethical standards.

Conclusion

As AI continues to reshape government operations, navigating its ethical landscape becomes increasingly critical. Addressing privacy concerns, combating bias, and ensuring transparency are not just ethical imperatives but foundational elements for building citizen trust and safety in AI-driven government services. The development and implementation of robust ethical frameworks are essential to harness the benefits of AI while safeguarding the values of our society.

The journey towards ethical AI in government requires collective effort and continuous dialogue. I encourage policymakers, technologists, ethicists, and citizens to engage in this crucial conversation. Share your insights, raise concerns, and contribute to developing frameworks that ensure AI in government is not only efficient and innovative but also ethical and just. Let’s work together to create a future where AI serves the public good, respecting our rights, values, and dignity.

CISA Releases New Identity and Access Management Guidance

Source: www.securityweek.com.

The US Cybersecurity and Infrastructure Security Agency (CISA) has released new guidance on how federal agencies can integrate identity and access management (IDAM) capabilities into their identity, credential, and access management (ICAM) architectures.

The new document (PDF) was released as part of CISA’s Continuous Diagnostics and Mitigation (CDM) program, which provides information security continuous monitoring (ISCM) capabilities to help federal agencies improve the security of their networks.

“There is no singular, authoritative, recognized way to architect an ICAM capability across an enterprise, which results in many U.S. government agencies approaching this from different directions with different priorities. Compounding this issue, agency Identity Management maturities vary, especially those related to tool expertise and ICAM-related policies, which may complicate the ongoing CDM integration efforts and lead to incomplete or ineffective ICAM deployments,” CISA notes.

To address this issue, CISA’s new guidance clarifies the CDM program’s IDAM scope, CDM IDAM capabilities, and federal agencies’ ICAM practice areas, and provides a CDM ICAM reference architecture that can be used to deploy a robust and effective ICAM capability with CDM functionality, the agency explains.

CDM IDAM capabilities, CISA notes, include sub-capabilities for privileged access management (PAM), identity lifecycle management (ILM), and mobile identity management (MIM). Non-person entities (NPE) and other non-PKI authenticators are also included under the manage credentials and authentication (CRED) capability.

PAM focuses on the management of privileged human and non-person entities and includes tools for ensuring strong authentication; ILM focuses on the lifecycle management of user identities and associated privileges; and MIM focuses on securing the use of mobile devices.
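As a rough illustration of the ILM idea, identity lifecycle management can be modeled as a small state machine in which only defined transitions are allowed. The states and transitions below are assumptions for illustration, not CISA's specification:

```python
# Minimal sketch of identity lifecycle management (ILM): identities move
# through explicit states, and only defined transitions are permitted.
# States and transitions here are illustrative only.

ALLOWED = {
    "requested": {"provisioned"},
    "provisioned": {"active"},
    "active": {"suspended", "deprovisioned"},
    "suspended": {"active", "deprovisioned"},
    "deprovisioned": set(),  # terminal: no path back without a new identity
}

class Identity:
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.state = "requested"

    def transition(self, new_state: str) -> None:
        """Move to new_state, rejecting any transition not in the policy."""
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.state} -> {new_state} not permitted")
        self.state = new_state
```

Making transitions explicit is what lets an agency prove that, for example, a deprovisioned account can never silently become active again.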

The CDM ICAM reference architecture, which also includes federation services (this includes additional service endpoints, the identity provider, and the service provider), is also meant to help agencies enable Zero Trust Architecture (ZTA).

The new guidance also details a notional CDM ICAM physical architecture, provides an overview of challenges that CDM ICAM faces, describes how ICAM use cases are implemented in ICAM services and components, and provides a series of recommendations for federal agencies to advance the development of the Identity Pillar of a ZTA.

Federal agencies are encouraged to review CISA’s new guidance and use it for implementing ICAM capabilities.

Can Federal Agencies Meet the 2024 Zero Trust Deadline?

Source: www.federalnewsnetwork.com.

In the realm of federal cybersecurity, change is both inevitable and necessary. The urgency of President Biden’s 2021 Executive Order to implement a zero trust architecture by September 2024 has set the stage for a pivotal transformation. Yet, as the deadline draws near, it’s apparent that while the directive’s intent is clear, the path to its realization is fraught with complexity and challenges.

The zero trust paradigm is a response to escalating threats faced by our nation’s digital infrastructure. However, translating this strategic vision into tangible operational realities is proving to be a formidable challenge. While agency directors and IT leaders alike are championing the cause, the reality is that those responsible for building and maintaining these systems are wrestling with difficult, multifaceted issues, and progress is moving more slowly than anticipated.

That raises an important question: Is the September 2024 deadline still feasible?

In theory, the time frame appears adequate. Yet, it’s crucial to acknowledge the intricate dynamics that arise when integrating a zero trust framework into pre-existing federal IT systems. Federal agencies often operate on a massive scale, and their networks have evolved over time, resulting in layers of legacy architecture and technical debt. As these agencies seek to transition to a zero trust architecture, they are confronted with the monumental task of reconfiguring their digital foundations while simultaneously ensuring seamless operations.

Data governance is another central challenge that demands attention. Federal agencies handle an extraordinary volume of sensitive information, and establishing a comprehensive data governance framework is paramount. The zero trust model necessitates granular visibility into data flows, user behaviors and system interactions. Achieving this level of visibility requires not only the implementation of sophisticated tools but also a cultural shift in how data is managed and accessed.

The journey to zero trust is further impeded by operational hurdles that are characteristic of large-scale enterprises. The federal landscape encompasses a diverse array of systems, applications and endpoints, all of which need to be evaluated and aligned with the zero trust framework. Legacy systems may lack native support for the security measures mandated by zero trust, requiring complex workarounds or even complete overhauls.

Despite these challenges, it is still possible to meet the 2024 deadline. Here are some best practices that agencies can use to help their teams accelerate the path to zero trust:

  • Secure leadership’s commitment. While senior agency leadership is usually aware of zero trust’s importance, they may not always understand the breadth and depth of IT capabilities required to implement it. That’s why agency leaders must take ownership of assessing and prioritizing the investments required to address IT and security gaps.
  • Get identity management right. While zero trust depends on executing prescribed security practices on multiple dimensions, agency leaders must ensure IT departments have the necessary resources to focus on user identity and access management. Identity is applied to networking, devices, data access, workloads and automation. As a result, getting identity right is foundational to the rest of the zero trust pillars.
  • Modernize data governance. A strong data governance strategy is at the heart of zero trust. Now is the time to invest in data classification, encryption and access controls, while ensuring that data handling policies are well-communicated and consistently enforced.
  • Embrace incremental progress. Achieving zero trust won’t happen overnight. Federal agencies should adopt an incremental approach, focusing first on securing critical assets and then expanding the scope. This allows for measured implementation, minimizes disruptions, and ensures that security improvements are continuous.
  • Prioritize training and education throughout the entire agency. Zero trust isn’t just a journey for security teams. It’s a journey for an entire federal agency and its implementation affects everyone. That’s why leaders must recognize the importance of allocating resources for training and education throughout the entire agency.
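The deny-by-default posture at the core of zero trust can be sketched as an access decision that grants only when every condition is explicitly satisfied. Attribute names and the classification scheme below are illustrative assumptions, not any agency's actual policy:

```python
# Minimal sketch of a zero trust access decision: deny by default, and grant
# only when identity, device posture, and data classification all check out.
# Attribute names and policy values are illustrative only.

CLEARANCE = {"public": 0, "internal": 1, "sensitive": 2}

def allow_access(request: dict) -> bool:
    """Grant only when every condition is explicitly satisfied; missing
    attributes fail closed rather than open."""
    return (
        request.get("identity_verified") is True
        and request.get("device_compliant") is True
        and CLEARANCE.get(request.get("user_level"), -1)
        >= CLEARANCE.get(request.get("data_label"), CLEARANCE["sensitive"])
    )
```

Note that an empty or malformed request is denied, and an unlabeled resource is treated as sensitive: the policy fails closed, which is the essence of "never trust, always verify."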

The journey to zero trust is undoubtedly complex, but with the right strategies in place it’s one that federal agencies can navigate successfully. While the September 2024 deadline remains a challenge, it can serve as a catalyst for lasting cybersecurity resilience. By acknowledging the unique intricacies of federal agencies and their respective systems, understanding the challenges they face, and implementing thoughtful solutions, agencies can meet the 2024 zero trust deadline and pave the way toward a more secure digital future.

Artificial Intelligence: DOD Needs Department-Wide Guidance to Inform Acquisitions

Source: www.gao.gov.

The Department of Defense is developing artificial intelligence capabilities—computer systems that can do tasks that normally require human intellect.

The private sector has been acquiring AI for years. Thirteen private companies told us about their AI acquisition practices. For example, some companies mentioned the importance of considering intellectual property and data rights when negotiating contracts for AI projects.

Although parts of DOD are already using AI, DOD hasn’t issued department-wide AI acquisitions guidance needed to ensure consistency. We recommended it develop such guidance—considering private company practices as appropriate.

What GAO Found

The Department of Defense (DOD) designated artificial intelligence (AI) a top modernization area and is allocating considerable spending to develop AI tools and capabilities. AI refers to computer systems designed to replicate a range of human functions and continually get better at their assigned tasks. DOD AI capabilities could be used in various ways, for example in identifying potential threats or targets on the battlefield.

GAO obtained information from 13 private sector companies about how they successfully acquire AI capabilities. Elements of these categories, shown below, are also reflected in GAO’s June 2021 AI Accountability Framework report (GAO-21-519SP).

Categories of Factors Selected Companies Reported Considering When Acquiring Artificial Intelligence Capabilities

Although numerous entities across DOD are acquiring, developing, or already using AI, DOD has not issued department-wide guidance for how its components should approach acquiring AI. DOD is in the process of planning to develop such guidance, but it has not defined concrete plans and has no timeline to do so. The military services also lack AI acquisition-specific guidance, though military officials noted that such guidance would be helpful to navigate the AI acquisition process. Without department-wide and tailored service-level guidance, DOD is missing an opportunity to ensure that it is consistently acquiring AI capabilities in a manner that accounts for the unique challenges associated with AI.

Various DOD components and military services have individually developed or plan to develop their own informal AI acquisition resources. Some of these resources reflect key factors identified by private companies for AI acquisition. For example, DOD’s Chief Digital and AI Officer oversees an AI marketplace known as Tradewind, which is designed to expedite the procurement of AI capabilities. Several Tradewind resources emphasize the need to consider intellectual property and data rights concerns when negotiating contracts for AI capabilities, a key factor identified by the companies GAO interviewed.

Why GAO Did This Study

DOD has begun to pursue increasingly advanced AI capabilities. DOD has historically struggled to acquire weapon systems software, and AI acquisitions pose additional challenges. In February 2022, GAO described the status of DOD’s efforts to develop and acquire AI for weapon systems.

Senate Report 116-236 accompanying the National Defense Authorization Act for Fiscal Year 2021 includes a provision for GAO to review DOD’s AI acquisition efforts. This is the second report in response to that provision. This report examines (1) key factors that selected private companies reported considering when acquiring AI capabilities, and (2) the extent to which DOD has department-wide AI acquisition guidance and how, if at all, this guidance reflects key factors identified by private sector companies.

GAO analyzed information provided by 13 private companies with expertise in designing, developing, and deploying AI systems in various sectors to determine the key factors. GAO also analyzed DOD documentation and compared it with the key factors, and interviewed DOD officials.

Recommendations

GAO is making four recommendations for DOD and the three military departments to develop guidance on acquiring AI capabilities, leveraging private company factors as appropriate. DOD concurred with the recommendations.

Recommendations for Executive Action

Department of Defense: The Secretary of Defense should ensure that the Chief Digital and AI Officer, in conjunction with other DOD acquisition policy offices as appropriate, prioritize establishing department-wide AI acquisition guidance, including leveraging key private company factors, as appropriate. (Recommendation 1) Status: Open.

Department of the Army: After DOD issues department-wide AI acquisition guidance, the Secretary of the Army should establish service-specific AI acquisition guidance that includes oversight processes and clear goals for these acquisitions, and leverages key private company factors, as appropriate. (Recommendation 2) Status: Open.

Department of the Navy: After DOD issues department-wide AI acquisition guidance, the Secretary of the Navy should establish service-specific AI acquisition guidance that includes oversight processes and clear goals for these acquisitions, and leverages key private company factors, as appropriate. (Recommendation 3) Status: Open.

Department of the Air Force: After DOD issues department-wide AI acquisition guidance, the Secretary of the Air Force should establish service-specific AI acquisition guidance that includes oversight processes and clear goals for these acquisitions, and leverages key private company factors, as appropriate. (Recommendation 4) Status: Open.

When GAO confirms what actions each agency has taken in response to these recommendations, it will provide updated information.